2026-03-10T14:40:39.072 INFO:root:teuthology version: 1.2.4.dev6+g1c580df7a
2026-03-10T14:40:39.076 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-10T14:40:39.097 INFO:teuthology.run:Config:
archive_path: /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1070
branch: squid
description: orch/cephadm/with-work/{0-distro/ubuntu_22.04 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_python}
email: null
first_in_suite: false
flavor: default
job_id: '1070'
ktype: distro
last_in_suite: false
machine_type: vps
name: kyr-2026-03-10_01:00:38-orch-squid-none-default-vps
no_nested_subset: false
openstack:
- volumes:
    count: 4
    size: 10
os_type: ubuntu
os_version: '22.04'
overrides:
  admin_socket:
    branch: squid
  ansible.cephlab:
    branch: main
    skip_tags: nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
    vars:
      timezone: UTC
  ceph:
    conf:
      global:
        mon election default strategy: 3
        ms bind msgr1: false
        ms bind msgr2: true
        ms type: async
      mgr:
        debug mgr: 20
        debug ms: 1
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
      osd:
        debug ms: 1
        debug osd: 20
        osd mclock iops capacity threshold hdd: 49000
        osd shutdown pgref assert: true
    flavor: default
    log-ignorelist:
    - \(MDS_ALL_DOWN\)
    - \(MDS_UP_LESS_THAN_MAX\)
    - but it is still running
    - overall HEALTH_
    - \(OSDMAP_FLAGS\)
    - \(PG_
    - \(OSD_
    - \(OBJECT_
    - \(POOL_APP_NOT_ENABLED\)
    log-only-match:
    - CEPHADM_
    sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
  ceph-deploy:
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon: {}
  cephadm:
    cephadm_mode: root
  install:
    ceph:
      extra_system_packages:
        deb:
        - python3-pytest
        rpm:
        - python3-pytest
      flavor: default
      sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
    extra_system_packages:
      deb:
      - python3-xmltodict
      - python3-jmespath
      rpm:
      - bzip2
      - perl-Test-Harness
      - python3-xmltodict
      - python3-jmespath
  workunit:
    branch: tt-squid
    sha1: 75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b
owner: kyr
priority: 1000
repo: https://github.com/ceph/ceph.git
roles:
- - mon.a
  - mon.c
  - mgr.y
  - osd.0
  - osd.1
  - osd.2
  - osd.3
  - client.0
  - ceph.rgw.foo.a
  - node-exporter.a
  - alertmanager.a
- - mon.b
  - mgr.x
  - osd.4
  - osd.5
  - osd.6
  - osd.7
  - client.1
  - prometheus.a
  - grafana.a
  - node-exporter.b
  - ceph.iscsi.iscsi.a
seed: 8043
sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
sleep_before_teardown: 0
subset: 1/64
suite: orch
suite_branch: tt-squid
suite_path: /home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa
suite_relpath: qa
suite_repo: https://github.com/kshtsk/ceph.git
suite_sha1: 75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b
targets:
  vm00.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOX0lz7T32V2ZbKnStOZgHNcdowPj3sBk+YC1VGwB/3Lg+3bFIdjw+v+dNL4L2pHwivM3wNg3GP/JZiKFU+f8ho=
  vm03.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKzgyjXQZiEh1I1tSYhqNETEaSZDHg5K40UmwM0K+cSGd6OL8Xm/OSbMdtGuyjgOiJwxxGK2tCTwMD3cZt9hDBw=
tasks:
- install: null
- cephadm:
    conf:
      mgr:
        debug mgr: 20
        debug ms: 1
- workunit:
    clients:
      client.0:
      - rados/test_python.sh
    timeout: 1h
teuthology:
  fragments_dropped: []
  meta: {}
  postmerge: []
teuthology_branch: clyso-debian-13
teuthology_repo: https://github.com/clyso/teuthology
teuthology_sha1: 1c580df7a9c7c2aadc272da296344fd99f27c444
timestamp: 2026-03-10_01:00:38
tube: vps
user: kyr
verbose: false
worker_log: /home/teuthos/.teuthology/dispatcher/dispatcher.vps.611473
2026-03-10T14:40:39.097 INFO:teuthology.run:suite_path is set to /home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa; will attempt to use it
2026-03-10T14:40:39.098 INFO:teuthology.run:Found tasks at /home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa/tasks
2026-03-10T14:40:39.098 INFO:teuthology.run_tasks:Running task internal.check_packages...
2026-03-10T14:40:39.098 INFO:teuthology.task.internal:Checking packages...
2026-03-10T14:40:39.098 INFO:teuthology.task.internal:Checking packages for os_type 'ubuntu', flavor 'default' and ceph hash 'e911bdebe5c8faa3800735d1568fcdca65db60df'
2026-03-10T14:40:39.098 WARNING:teuthology.packaging:More than one of ref, tag, branch, or sha1 supplied; using branch
2026-03-10T14:40:39.098 INFO:teuthology.packaging:ref: None
2026-03-10T14:40:39.098 INFO:teuthology.packaging:tag: None
2026-03-10T14:40:39.098 INFO:teuthology.packaging:branch: squid
2026-03-10T14:40:39.098 INFO:teuthology.packaging:sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T14:40:39.099 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&ref=squid
2026-03-10T14:40:39.706 INFO:teuthology.task.internal:Found packages for ceph version 19.2.3-678-ge911bdeb-1jammy
2026-03-10T14:40:39.707 INFO:teuthology.run_tasks:Running task internal.buildpackages_prep...
2026-03-10T14:40:39.708 INFO:teuthology.task.internal:no buildpackages task found
2026-03-10T14:40:39.708 INFO:teuthology.run_tasks:Running task internal.save_config...
2026-03-10T14:40:39.708 INFO:teuthology.task.internal:Saving configuration
2026-03-10T14:40:39.712 INFO:teuthology.run_tasks:Running task internal.check_lock...
2026-03-10T14:40:39.713 INFO:teuthology.task.internal.check_lock:Checking locks...
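The package check above resolves the branch to prebuilt packages by querying the shaman search API; the endpoint and query parameters below are taken verbatim from the logged URL, while the helper name and parameter assembly are assumptions for illustration:

```python
from urllib.parse import urlencode


def shaman_search_url(project, flavor, distro, arch, ref,
                      base="https://shaman.ceph.com/api/search"):
    """Build a shaman package-search URL (hypothetical helper).

    distros is encoded as "<distro>/<version>/<arch>", e.g. ubuntu/22.04/x86_64,
    which urlencode percent-escapes to ubuntu%2F22.04%2Fx86_64 as seen in the log.
    """
    params = {
        "status": "ready",        # only builds whose repos are ready to use
        "project": project,
        "flavor": flavor,
        "distros": distro + "/" + arch,
        "ref": ref,               # branch name; teuthology preferred it over sha1
    }
    return base + "?" + urlencode(params)


url = shaman_search_url("ceph", "default", "ubuntu/22.04", "x86_64", "squid")
```

With these inputs the result reproduces the exact URL the DEBUG line above shows being queried.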
2026-03-10T14:40:39.720 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm00.local', 'description': '/archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1070', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'ubuntu', 'os_version': '22.04', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-10 14:39:35.570655', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:00', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOX0lz7T32V2ZbKnStOZgHNcdowPj3sBk+YC1VGwB/3Lg+3bFIdjw+v+dNL4L2pHwivM3wNg3GP/JZiKFU+f8ho='}
2026-03-10T14:40:39.727 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm03.local', 'description': '/archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1070', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'ubuntu', 'os_version': '22.04', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-10 14:39:35.571050', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:03', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKzgyjXQZiEh1I1tSYhqNETEaSZDHg5K40UmwM0K+cSGd6OL8Xm/OSbMdtGuyjgOiJwxxGK2tCTwMD3cZt9hDBw='}
2026-03-10T14:40:39.727 INFO:teuthology.run_tasks:Running task internal.add_remotes...
2026-03-10T14:40:39.728 INFO:teuthology.task.internal:roles: ubuntu@vm00.local - ['mon.a', 'mon.c', 'mgr.y', 'osd.0', 'osd.1', 'osd.2', 'osd.3', 'client.0', 'ceph.rgw.foo.a', 'node-exporter.a', 'alertmanager.a']
2026-03-10T14:40:39.728 INFO:teuthology.task.internal:roles: ubuntu@vm03.local - ['mon.b', 'mgr.x', 'osd.4', 'osd.5', 'osd.6', 'osd.7', 'client.1', 'prometheus.a', 'grafana.a', 'node-exporter.b', 'ceph.iscsi.iscsi.a']
2026-03-10T14:40:39.728 INFO:teuthology.run_tasks:Running task console_log...
2026-03-10T14:40:39.734 DEBUG:teuthology.task.console_log:vm00 does not support IPMI; excluding
2026-03-10T14:40:39.741 DEBUG:teuthology.task.console_log:vm03 does not support IPMI; excluding
2026-03-10T14:40:39.741 DEBUG:teuthology.exit:Installing handler: Handler(exiter=, func=.kill_console_loggers at 0x7fb241383eb0>, signals=[15])
2026-03-10T14:40:39.741 INFO:teuthology.run_tasks:Running task internal.connect...
2026-03-10T14:40:39.742 INFO:teuthology.task.internal:Opening connections...
2026-03-10T14:40:39.742 DEBUG:teuthology.task.internal:connecting to ubuntu@vm00.local
2026-03-10T14:40:39.743 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm00.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T14:40:39.800 DEBUG:teuthology.task.internal:connecting to ubuntu@vm03.local
2026-03-10T14:40:39.800 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm03.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T14:40:39.858 INFO:teuthology.run_tasks:Running task internal.push_inventory...
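The add_remotes lines above pair each role list from the job config with a locked target, in order. A minimal sketch of that positional pairing (function name hypothetical, role lists abbreviated from the log):

```python
def assign_roles(targets, role_lists):
    # Role lists are matched to targets positionally: the first list of
    # roles lands on the first target, and so on, as in the log above.
    if len(targets) != len(role_lists):
        raise ValueError("need exactly one target per role list")
    by_host = dict(zip(targets, role_lists))
    # Also build the reverse lookup: which host runs a given role.
    by_role = {role: host for host, roles in by_host.items() for role in roles}
    return by_host, by_role


by_host, by_role = assign_roles(
    ["ubuntu@vm00.local", "ubuntu@vm03.local"],
    [["mon.a", "mon.c", "mgr.y", "osd.0", "client.0"],
     ["mon.b", "mgr.x", "osd.4", "client.1"]],
)
```

The reverse map is what later tasks need, e.g. to decide that client.0 workunits run on vm00.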
2026-03-10T14:40:39.860 DEBUG:teuthology.orchestra.run.vm00:> uname -m
2026-03-10T14:40:39.870 INFO:teuthology.orchestra.run.vm00.stdout:x86_64
2026-03-10T14:40:39.870 DEBUG:teuthology.orchestra.run.vm00:> cat /etc/os-release
2026-03-10T14:40:39.915 INFO:teuthology.orchestra.run.vm00.stdout:PRETTY_NAME="Ubuntu 22.04.5 LTS"
2026-03-10T14:40:39.915 INFO:teuthology.orchestra.run.vm00.stdout:NAME="Ubuntu"
2026-03-10T14:40:39.915 INFO:teuthology.orchestra.run.vm00.stdout:VERSION_ID="22.04"
2026-03-10T14:40:39.915 INFO:teuthology.orchestra.run.vm00.stdout:VERSION="22.04.5 LTS (Jammy Jellyfish)"
2026-03-10T14:40:39.915 INFO:teuthology.orchestra.run.vm00.stdout:VERSION_CODENAME=jammy
2026-03-10T14:40:39.915 INFO:teuthology.orchestra.run.vm00.stdout:ID=ubuntu
2026-03-10T14:40:39.915 INFO:teuthology.orchestra.run.vm00.stdout:ID_LIKE=debian
2026-03-10T14:40:39.915 INFO:teuthology.orchestra.run.vm00.stdout:HOME_URL="https://www.ubuntu.com/"
2026-03-10T14:40:39.915 INFO:teuthology.orchestra.run.vm00.stdout:SUPPORT_URL="https://help.ubuntu.com/"
2026-03-10T14:40:39.915 INFO:teuthology.orchestra.run.vm00.stdout:BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
2026-03-10T14:40:39.915 INFO:teuthology.orchestra.run.vm00.stdout:PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
2026-03-10T14:40:39.915 INFO:teuthology.orchestra.run.vm00.stdout:UBUNTU_CODENAME=jammy
2026-03-10T14:40:39.916 INFO:teuthology.lock.ops:Updating vm00.local on lock server
2026-03-10T14:40:39.920 DEBUG:teuthology.orchestra.run.vm03:> uname -m
2026-03-10T14:40:39.923 INFO:teuthology.orchestra.run.vm03.stdout:x86_64
2026-03-10T14:40:39.923 DEBUG:teuthology.orchestra.run.vm03:> cat /etc/os-release
2026-03-10T14:40:39.970 INFO:teuthology.orchestra.run.vm03.stdout:PRETTY_NAME="Ubuntu 22.04.5 LTS"
2026-03-10T14:40:39.970 INFO:teuthology.orchestra.run.vm03.stdout:NAME="Ubuntu"
2026-03-10T14:40:39.970 INFO:teuthology.orchestra.run.vm03.stdout:VERSION_ID="22.04"
2026-03-10T14:40:39.970 INFO:teuthology.orchestra.run.vm03.stdout:VERSION="22.04.5 LTS (Jammy Jellyfish)"
2026-03-10T14:40:39.970 INFO:teuthology.orchestra.run.vm03.stdout:VERSION_CODENAME=jammy
2026-03-10T14:40:39.970 INFO:teuthology.orchestra.run.vm03.stdout:ID=ubuntu
2026-03-10T14:40:39.970 INFO:teuthology.orchestra.run.vm03.stdout:ID_LIKE=debian
2026-03-10T14:40:39.970 INFO:teuthology.orchestra.run.vm03.stdout:HOME_URL="https://www.ubuntu.com/"
2026-03-10T14:40:39.970 INFO:teuthology.orchestra.run.vm03.stdout:SUPPORT_URL="https://help.ubuntu.com/"
2026-03-10T14:40:39.970 INFO:teuthology.orchestra.run.vm03.stdout:BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
2026-03-10T14:40:39.970 INFO:teuthology.orchestra.run.vm03.stdout:PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
2026-03-10T14:40:39.970 INFO:teuthology.orchestra.run.vm03.stdout:UBUNTU_CODENAME=jammy
2026-03-10T14:40:39.971 INFO:teuthology.lock.ops:Updating vm03.local on lock server
2026-03-10T14:40:39.975 INFO:teuthology.run_tasks:Running task internal.serialize_remote_roles...
2026-03-10T14:40:39.977 INFO:teuthology.run_tasks:Running task internal.check_conflict...
2026-03-10T14:40:39.978 INFO:teuthology.task.internal:Checking for old test directory...
2026-03-10T14:40:39.978 DEBUG:teuthology.orchestra.run.vm00:> test '!' -e /home/ubuntu/cephtest
2026-03-10T14:40:39.980 DEBUG:teuthology.orchestra.run.vm03:> test '!' -e /home/ubuntu/cephtest
2026-03-10T14:40:40.014 INFO:teuthology.run_tasks:Running task internal.check_ceph_data...
2026-03-10T14:40:40.015 INFO:teuthology.task.internal:Checking for non-empty /var/lib/ceph...
2026-03-10T14:40:40.015 DEBUG:teuthology.orchestra.run.vm00:> test -z $(ls -A /var/lib/ceph)
2026-03-10T14:40:40.025 DEBUG:teuthology.orchestra.run.vm03:> test -z $(ls -A /var/lib/ceph)
2026-03-10T14:40:40.027 INFO:teuthology.orchestra.run.vm00.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-10T14:40:40.058 INFO:teuthology.orchestra.run.vm03.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-10T14:40:40.059 INFO:teuthology.run_tasks:Running task internal.vm_setup...
2026-03-10T14:40:40.071 DEBUG:teuthology.orchestra.run.vm00:> test -e /ceph-qa-ready
2026-03-10T14:40:40.074 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T14:40:40.450 DEBUG:teuthology.orchestra.run.vm03:> test -e /ceph-qa-ready
2026-03-10T14:40:40.453 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T14:40:40.671 INFO:teuthology.run_tasks:Running task internal.base...
2026-03-10T14:40:40.672 INFO:teuthology.task.internal:Creating test directory...
2026-03-10T14:40:40.673 DEBUG:teuthology.orchestra.run.vm00:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-10T14:40:40.674 DEBUG:teuthology.orchestra.run.vm03:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-10T14:40:40.677 INFO:teuthology.run_tasks:Running task internal.archive_upload...
2026-03-10T14:40:40.679 INFO:teuthology.run_tasks:Running task internal.archive...
2026-03-10T14:40:40.680 INFO:teuthology.task.internal:Creating archive directory...
2026-03-10T14:40:40.680 DEBUG:teuthology.orchestra.run.vm00:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-10T14:40:40.720 DEBUG:teuthology.orchestra.run.vm03:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-10T14:40:40.726 INFO:teuthology.run_tasks:Running task internal.coredump...
2026-03-10T14:40:40.727 INFO:teuthology.task.internal:Enabling coredump saving...
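The /var/lib/ceph check above succeeds even though `ls` prints "No such file or directory": with the unquoted `$(ls -A ...)` substitution, a failed `ls` yields an empty string and `test -z` passes. A sketch of the intended semantics, where a missing directory counts as empty (helper name hypothetical):

```python
import os
import tempfile


def dir_empty(path):
    # A directory with no entries, or no directory at all, both count as
    # "empty", matching the shell check in the log above where ls fails
    # on stderr yet the test still passes.
    try:
        return not os.listdir(path)
    except FileNotFoundError:
        return True


tmp = tempfile.mkdtemp()
empty_before = dir_empty(tmp)                       # freshly created: empty
open(os.path.join(tmp, "osd.0"), "w").close()
empty_after = dir_empty(tmp)                        # now has an entry
missing = dir_empty(os.path.join(tmp, "no-such"))   # missing path: "empty"
```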
2026-03-10T14:40:40.727 DEBUG:teuthology.orchestra.run.vm00:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-10T14:40:40.766 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T14:40:40.766 DEBUG:teuthology.orchestra.run.vm03:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-10T14:40:40.769 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T14:40:40.770 DEBUG:teuthology.orchestra.run.vm00:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-10T14:40:40.808 DEBUG:teuthology.orchestra.run.vm03:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-10T14:40:40.815 INFO:teuthology.orchestra.run.vm00.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T14:40:40.819 INFO:teuthology.orchestra.run.vm03.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T14:40:40.820 INFO:teuthology.orchestra.run.vm00.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T14:40:40.823 INFO:teuthology.orchestra.run.vm03.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T14:40:40.824 INFO:teuthology.run_tasks:Running task internal.sudo...
2026-03-10T14:40:40.825 INFO:teuthology.task.internal:Configuring sudo...
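The coredump task above sets kernel.core_pattern to a template using the kernel's %t (dump time, epoch seconds) and %p (dumped process PID) specifiers. The expansion is done by the kernel when writing a core file; the sketch below only mimics those two specifiers to show what filenames to expect in archive/coredump:

```python
def expand_core_pattern(pattern, pid, ts):
    # Mimics two of the kernel's core_pattern specifiers (%t and %p) to
    # illustrate the resulting file name; the real expansion happens in
    # the kernel, not in user space.
    return pattern.replace("%t", str(ts)).replace("%p", str(pid))


# Hypothetical crash: PID 1234 dumping at epoch second 1767000000.
path = expand_core_pattern(
    "/home/ubuntu/cephtest/archive/coredump/%t.%p.core", pid=1234, ts=1767000000)
```

Because the timestamp leads the name, cores sort chronologically in the archive directory.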
2026-03-10T14:40:40.825 DEBUG:teuthology.orchestra.run.vm00:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-10T14:40:40.865 DEBUG:teuthology.orchestra.run.vm03:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-10T14:40:40.875 INFO:teuthology.run_tasks:Running task internal.syslog...
2026-03-10T14:40:40.877 INFO:teuthology.task.internal.syslog:Starting syslog monitoring...
2026-03-10T14:40:40.877 DEBUG:teuthology.orchestra.run.vm00:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-10T14:40:40.912 DEBUG:teuthology.orchestra.run.vm03:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-10T14:40:40.918 DEBUG:teuthology.orchestra.run.vm00:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T14:40:40.958 DEBUG:teuthology.orchestra.run.vm00:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T14:40:41.002 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-03-10T14:40:41.002 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-10T14:40:41.051 DEBUG:teuthology.orchestra.run.vm03:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T14:40:41.054 DEBUG:teuthology.orchestra.run.vm03:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T14:40:41.098 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-03-10T14:40:41.098 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-10T14:40:41.146 DEBUG:teuthology.orchestra.run.vm00:> sudo service rsyslog restart
2026-03-10T14:40:41.148 DEBUG:teuthology.orchestra.run.vm03:> sudo service rsyslog restart
2026-03-10T14:40:41.202 INFO:teuthology.run_tasks:Running task internal.timer...
2026-03-10T14:40:41.203 INFO:teuthology.task.internal:Starting timer...
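The sudo task above applies two sed substitutions to /etc/sudoers: it negates any non-commented `requiretty` flag (so non-interactive SSH sessions can use sudo) and un-negates `!visiblepw`. The same two edits sketched with Python regexes, using the patterns from the logged sed command:

```python
import re


def relax_sudoers(text):
    # 'Defaults requiretty' -> 'Defaults !requiretty', but only on lines
    # whose prefix contains no '#' (i.e. not inside a comment).
    text = re.sub(r"^([^#]*) (requiretty)", r"\1 !\2", text, flags=re.M)
    # 'Defaults !visiblepw' -> 'Defaults visiblepw' under the same condition.
    text = re.sub(r"^([^#]*) !(visiblepw)", r"\1 \2", text, flags=re.M)
    return text


out = relax_sudoers("Defaults requiretty\nDefaults !visiblepw\n")
```

The `-i.orig.teuthology` suffix in the real command keeps a backup of the original sudoers file, which the Python sketch omits.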
2026-03-10T14:40:41.203 INFO:teuthology.run_tasks:Running task pcp...
2026-03-10T14:40:41.206 INFO:teuthology.run_tasks:Running task selinux...
2026-03-10T14:40:41.208 INFO:teuthology.task.selinux:Excluding vm00: VMs are not yet supported
2026-03-10T14:40:41.208 INFO:teuthology.task.selinux:Excluding vm03: VMs are not yet supported
2026-03-10T14:40:41.208 DEBUG:teuthology.task.selinux:Getting current SELinux state
2026-03-10T14:40:41.208 DEBUG:teuthology.task.selinux:Existing SELinux modes: {}
2026-03-10T14:40:41.208 INFO:teuthology.task.selinux:Putting SELinux into permissive mode
2026-03-10T14:40:41.208 INFO:teuthology.run_tasks:Running task ansible.cephlab...
2026-03-10T14:40:41.209 DEBUG:teuthology.task:Applying overrides for task ansible.cephlab: {'branch': 'main', 'skip_tags': 'nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs', 'vars': {'timezone': 'UTC'}}
2026-03-10T14:40:41.210 DEBUG:teuthology.repo_utils:Setting repo remote to https://github.com/ceph/ceph-cm-ansible.git
2026-03-10T14:40:41.211 INFO:teuthology.repo_utils:Fetching github.com_ceph_ceph-cm-ansible_main from origin
2026-03-10T14:40:41.726 DEBUG:teuthology.repo_utils:Resetting repo at /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main to origin/main
2026-03-10T14:40:41.732 INFO:teuthology.task.ansible:Playbook: [{'import_playbook': 'ansible_managed.yml'}, {'import_playbook': 'teuthology.yml'}, {'hosts': 'testnodes', 'tasks': [{'set_fact': {'ran_from_cephlab_playbook': True}}]}, {'import_playbook': 'testnodes.yml'}, {'import_playbook': 'container-host.yml'}, {'import_playbook': 'cobbler.yml'}, {'import_playbook': 'paddles.yml'}, {'import_playbook': 'pulpito.yml'}, {'hosts': 'testnodes', 'become': True, 'tasks': [{'name': 'Touch /ceph-qa-ready', 'file': {'path': '/ceph-qa-ready', 'state': 'touch'}, 'when': 'ran_from_cephlab_playbook|bool'}]}]
2026-03-10T14:40:41.732 DEBUG:teuthology.task.ansible:Running ansible-playbook -v --extra-vars '{"ansible_ssh_user": "ubuntu", "timezone": "UTC"}' -i /tmp/teuth_ansible_inventoryfn6bg6h9 --limit vm00.local,vm03.local /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main/cephlab.yml --skip-tags nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
2026-03-10T14:42:49.984 DEBUG:teuthology.task.ansible:Reconnecting to [Remote(name='ubuntu@vm00.local'), Remote(name='ubuntu@vm03.local')]
2026-03-10T14:42:49.985 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm00.local'
2026-03-10T14:42:49.985 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm00.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T14:42:50.044 DEBUG:teuthology.orchestra.run.vm00:> true
2026-03-10T14:42:50.277 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm00.local'
2026-03-10T14:42:50.277 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm03.local'
2026-03-10T14:42:50.277 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm03.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T14:42:50.338 DEBUG:teuthology.orchestra.run.vm03:> true
2026-03-10T14:42:50.417 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm03.local'
2026-03-10T14:42:50.418 INFO:teuthology.run_tasks:Running task clock...
2026-03-10T14:42:50.421 INFO:teuthology.task.clock:Syncing clocks and checking initial clock skew...
2026-03-10T14:42:50.421 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-10T14:42:50.421 DEBUG:teuthology.orchestra.run.vm00:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-10T14:42:50.422 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-10T14:42:50.422 DEBUG:teuthology.orchestra.run.vm03:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-10T14:42:50.442 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 14:42:50 ntpd[16101]: ntpd 4.2.8p15@1.3728-o Wed Feb 16 17:13:02 UTC 2022 (1): Starting
2026-03-10T14:42:50.442 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 14:42:50 ntpd[16101]: Command line: ntpd -gq
2026-03-10T14:42:50.442 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 14:42:50 ntpd[16101]: ----------------------------------------------------
2026-03-10T14:42:50.442 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 14:42:50 ntpd[16101]: ntp-4 is maintained by Network Time Foundation,
2026-03-10T14:42:50.442 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 14:42:50 ntpd[16101]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
2026-03-10T14:42:50.442 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 14:42:50 ntpd[16101]: corporation.  Support and training for ntp-4 are
2026-03-10T14:42:50.442 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 14:42:50 ntpd[16101]: available at https://www.nwtime.org/support
2026-03-10T14:42:50.442 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 14:42:50 ntpd[16101]: ----------------------------------------------------
2026-03-10T14:42:50.442 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 14:42:50 ntpd[16101]: proto: precision = 0.030 usec (-25)
2026-03-10T14:42:50.442 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 14:42:50 ntpd[16101]: basedate set to 2022-02-04
2026-03-10T14:42:50.443 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 14:42:50 ntpd[16101]: gps base set to 2022-02-06 (week 2196)
2026-03-10T14:42:50.443 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 14:42:50 ntpd[16101]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): good hash signature
2026-03-10T14:42:50.443 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 14:42:50 ntpd[16101]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): loaded, expire=2025-12-28T00:00:00Z last=2017-01-01T00:00:00Z ofs=37
2026-03-10T14:42:50.443 INFO:teuthology.orchestra.run.vm00.stderr:10 Mar 14:42:50 ntpd[16101]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): expired 73 days ago
2026-03-10T14:42:50.444 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 14:42:50 ntpd[16101]: Listen and drop on 0 v6wildcard [::]:123
2026-03-10T14:42:50.444 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 14:42:50 ntpd[16101]: Listen and drop on 1 v4wildcard 0.0.0.0:123
2026-03-10T14:42:50.444 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 14:42:50 ntpd[16101]: Listen normally on 2 lo 127.0.0.1:123
2026-03-10T14:42:50.444 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 14:42:50 ntpd[16101]: Listen normally on 3 ens3 192.168.123.100:123
2026-03-10T14:42:50.444 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 14:42:50 ntpd[16101]: Listen normally on 4 lo [::1]:123
2026-03-10T14:42:50.444 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 14:42:50 ntpd[16101]: Listen normally on 5 ens3 [fe80::5055:ff:fe00:0%2]:123
2026-03-10T14:42:50.444 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 14:42:50 ntpd[16101]: Listening on routing socket on fd #22 for interface updates
2026-03-10T14:42:50.476 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 14:42:50 ntpd[16105]: ntpd 4.2.8p15@1.3728-o Wed Feb 16 17:13:02 UTC 2022 (1): Starting
2026-03-10T14:42:50.477 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 14:42:50 ntpd[16105]: Command line: ntpd -gq
2026-03-10T14:42:50.477 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 14:42:50 ntpd[16105]: ----------------------------------------------------
2026-03-10T14:42:50.477 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 14:42:50 ntpd[16105]: ntp-4 is maintained by Network Time Foundation,
2026-03-10T14:42:50.477 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 14:42:50 ntpd[16105]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
2026-03-10T14:42:50.477 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 14:42:50 ntpd[16105]: corporation.  Support and training for ntp-4 are
2026-03-10T14:42:50.477 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 14:42:50 ntpd[16105]: available at https://www.nwtime.org/support
2026-03-10T14:42:50.477 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 14:42:50 ntpd[16105]: ----------------------------------------------------
2026-03-10T14:42:50.477 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 14:42:50 ntpd[16105]: proto: precision = 0.029 usec (-25)
2026-03-10T14:42:50.477 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 14:42:50 ntpd[16105]: basedate set to 2022-02-04
2026-03-10T14:42:50.477 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 14:42:50 ntpd[16105]: gps base set to 2022-02-06 (week 2196)
2026-03-10T14:42:50.477 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 14:42:50 ntpd[16105]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): good hash signature
2026-03-10T14:42:50.478 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 14:42:50 ntpd[16105]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): loaded, expire=2025-12-28T00:00:00Z last=2017-01-01T00:00:00Z ofs=37
2026-03-10T14:42:50.478 INFO:teuthology.orchestra.run.vm03.stderr:10 Mar 14:42:50 ntpd[16105]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): expired 73 days ago
2026-03-10T14:42:50.479 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 14:42:50 ntpd[16105]: Listen and drop on 0 v6wildcard [::]:123
2026-03-10T14:42:50.479 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 14:42:50 ntpd[16105]: Listen and drop on 1 v4wildcard 0.0.0.0:123
2026-03-10T14:42:50.479 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 14:42:50 ntpd[16105]: Listen normally on 2 lo 127.0.0.1:123
2026-03-10T14:42:50.479 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 14:42:50 ntpd[16105]: Listen normally on 3 ens3 192.168.123.103:123
2026-03-10T14:42:50.479 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 14:42:50 ntpd[16105]: Listen normally on 4 lo [::1]:123
2026-03-10T14:42:50.479 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 14:42:50 ntpd[16105]: Listen normally on 5 ens3 [fe80::5055:ff:fe00:3%2]:123
2026-03-10T14:42:50.479 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 14:42:50 ntpd[16105]: Listening on routing socket on fd #22 for interface updates
2026-03-10T14:42:51.444 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 14:42:51 ntpd[16101]: Soliciting pool server 90.187.112.137
2026-03-10T14:42:51.478 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 14:42:51 ntpd[16105]: Soliciting pool server 90.187.112.137
2026-03-10T14:42:52.443 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 14:42:52 ntpd[16101]: Soliciting pool server 93.177.65.20
2026-03-10T14:42:52.443 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 14:42:52 ntpd[16101]: Soliciting pool server 212.132.97.26
2026-03-10T14:42:52.476 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 14:42:52 ntpd[16105]: Soliciting pool server 93.177.65.20
2026-03-10T14:42:52.477 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 14:42:52 ntpd[16105]: Soliciting pool server 212.132.97.26
2026-03-10T14:42:53.442 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 14:42:53 ntpd[16101]: Soliciting pool server 51.75.67.47
2026-03-10T14:42:53.442 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 14:42:53 ntpd[16101]: Soliciting pool server 157.90.24.29
2026-03-10T14:42:53.443 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 14:42:53 ntpd[16101]: Soliciting pool server 188.245.170.46
2026-03-10T14:42:53.476 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 14:42:53 ntpd[16105]: Soliciting pool server 51.75.67.47
2026-03-10T14:42:53.476 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 14:42:53 ntpd[16105]: Soliciting pool server 157.90.24.29
2026-03-10T14:42:53.476 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 14:42:53 ntpd[16105]: Soliciting pool server 188.245.170.46
2026-03-10T14:42:54.442 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 14:42:54 ntpd[16101]: Soliciting pool server 185.232.69.65
2026-03-10T14:42:54.443 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 14:42:54 ntpd[16101]: Soliciting pool server 195.201.125.53
2026-03-10T14:42:54.443 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 14:42:54 ntpd[16101]: Soliciting pool server 116.203.244.102
2026-03-10T14:42:54.443 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 14:42:54 ntpd[16101]: Soliciting pool server 62.113.219.231
2026-03-10T14:42:54.475 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 14:42:54 ntpd[16105]: Soliciting pool server 185.232.69.65
2026-03-10T14:42:54.476 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 14:42:54 ntpd[16105]: Soliciting pool server 195.201.125.53
2026-03-10T14:42:54.476 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 14:42:54 ntpd[16105]: Soliciting pool server 116.203.244.102
2026-03-10T14:42:54.476 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 14:42:54 ntpd[16105]: Soliciting pool server 62.113.219.231
2026-03-10T14:42:55.443 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 14:42:55 ntpd[16101]: Soliciting pool server 185.233.107.180
2026-03-10T14:42:55.443 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 14:42:55 ntpd[16101]: Soliciting pool server 213.239.234.28
2026-03-10T14:42:55.443 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 14:42:55 ntpd[16101]: Soliciting pool server 168.119.211.223
2026-03-10T14:42:55.443 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 14:42:55 ntpd[16101]: Soliciting pool server 185.125.190.58
2026-03-10T14:42:55.475 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 14:42:55 ntpd[16105]: Soliciting pool server 185.233.107.180
2026-03-10T14:42:55.475 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 14:42:55 ntpd[16105]: Soliciting pool server 213.239.234.28
2026-03-10T14:42:55.475 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 14:42:55 ntpd[16105]: Soliciting pool server 168.119.211.223
2026-03-10T14:42:55.476 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 14:42:55 ntpd[16105]: Soliciting pool server 185.125.190.58
2026-03-10T14:42:56.442 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 14:42:56 ntpd[16101]: Soliciting pool server 185.125.190.57 2026-03-10T14:42:56.443 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 14:42:56 ntpd[16101]: Soliciting pool server 3.121.254.221 2026-03-10T14:42:56.443 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 14:42:56 ntpd[16101]: Soliciting pool server 172.104.134.72 2026-03-10T14:42:56.475 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 14:42:56 ntpd[16105]: Soliciting pool server 185.125.190.57 2026-03-10T14:42:56.475 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 14:42:56 ntpd[16105]: Soliciting pool server 3.121.254.221 2026-03-10T14:42:56.475 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 14:42:56 ntpd[16105]: Soliciting pool server 172.104.134.72 2026-03-10T14:42:58.474 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 14:42:58 ntpd[16101]: ntpd: time slew +0.004035 s 2026-03-10T14:42:58.474 INFO:teuthology.orchestra.run.vm00.stdout:ntpd: time slew +0.004035s 2026-03-10T14:42:58.495 INFO:teuthology.orchestra.run.vm00.stdout: remote refid st t when poll reach delay offset jitter 2026-03-10T14:42:58.495 INFO:teuthology.orchestra.run.vm00.stdout:============================================================================== 2026-03-10T14:42:58.495 INFO:teuthology.orchestra.run.vm00.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-10T14:42:58.495 INFO:teuthology.orchestra.run.vm00.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-10T14:42:58.495 INFO:teuthology.orchestra.run.vm00.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-10T14:42:58.495 INFO:teuthology.orchestra.run.vm00.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-10T14:42:58.495 INFO:teuthology.orchestra.run.vm00.stdout: ntp.ubuntu.com .POOL. 
16 p - 64 0 0.000 +0.000 0.000 2026-03-10T14:42:59.502 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 14:42:59 ntpd[16105]: ntpd: time slew +0.012664 s 2026-03-10T14:42:59.502 INFO:teuthology.orchestra.run.vm03.stdout:ntpd: time slew +0.012664s 2026-03-10T14:42:59.525 INFO:teuthology.orchestra.run.vm03.stdout: remote refid st t when poll reach delay offset jitter 2026-03-10T14:42:59.525 INFO:teuthology.orchestra.run.vm03.stdout:============================================================================== 2026-03-10T14:42:59.525 INFO:teuthology.orchestra.run.vm03.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-10T14:42:59.525 INFO:teuthology.orchestra.run.vm03.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-10T14:42:59.526 INFO:teuthology.orchestra.run.vm03.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-10T14:42:59.526 INFO:teuthology.orchestra.run.vm03.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-10T14:42:59.526 INFO:teuthology.orchestra.run.vm03.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-10T14:42:59.526 INFO:teuthology.run_tasks:Running task install... 
2026-03-10T14:42:59.529 DEBUG:teuthology.task.install:project ceph 2026-03-10T14:42:59.529 DEBUG:teuthology.task.install:INSTALL overrides: {'ceph': {'extra_system_packages': {'deb': ['python3-pytest'], 'rpm': ['python3-pytest']}, 'flavor': 'default', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df'}, 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}} 2026-03-10T14:42:59.529 DEBUG:teuthology.task.install:config {'extra_system_packages': {'deb': ['python3-pytest', 'python3-xmltodict', 'python3-jmespath'], 'rpm': ['python3-pytest', 'bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}, 'flavor': 'default', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df'} 2026-03-10T14:42:59.529 INFO:teuthology.task.install:Using flavor: default 2026-03-10T14:42:59.532 DEBUG:teuthology.task.install:Package list is: {'deb': ['ceph', 'cephadm', 'ceph-mds', 'ceph-mgr', 'ceph-common', 'ceph-fuse', 'ceph-test', 'ceph-volume', 'radosgw', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'libcephfs2', 'libcephfs-dev', 'librados2', 'librbd1', 'rbd-fuse'], 'rpm': ['ceph-radosgw', 'ceph-test', 'ceph', 'ceph-base', 'cephadm', 'ceph-immutable-object-cache', 'ceph-mgr', 'ceph-mgr-dashboard', 'ceph-mgr-diskprediction-local', 'ceph-mgr-rook', 'ceph-mgr-cephadm', 'ceph-fuse', 'ceph-volume', 'librados-devel', 'libcephfs2', 'libcephfs-devel', 'librados2', 'librbd1', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'rbd-fuse', 'rbd-mirror', 'rbd-nbd']} 2026-03-10T14:42:59.532 INFO:teuthology.task.install:extra packages: [] 2026-03-10T14:42:59.532 DEBUG:teuthology.orchestra.run.vm00:> sudo apt-key list | grep Ceph 2026-03-10T14:42:59.532 DEBUG:teuthology.orchestra.run.vm03:> sudo apt-key list | grep Ceph 2026-03-10T14:42:59.573 INFO:teuthology.orchestra.run.vm00.stderr:Warning: apt-key is deprecated. 
Manage keyring files in trusted.gpg.d instead (see apt-key(8)). 2026-03-10T14:42:59.593 INFO:teuthology.orchestra.run.vm00.stdout:uid [ unknown] Ceph automated package build (Ceph automated package build) 2026-03-10T14:42:59.593 INFO:teuthology.orchestra.run.vm00.stdout:uid [ unknown] Ceph.com (release key) 2026-03-10T14:42:59.593 INFO:teuthology.task.install.deb:Installing packages: ceph, cephadm, ceph-mds, ceph-mgr, ceph-common, ceph-fuse, ceph-test, ceph-volume, radosgw, python3-rados, python3-rgw, python3-cephfs, python3-rbd, libcephfs2, libcephfs-dev, librados2, librbd1, rbd-fuse on remote deb x86_64 2026-03-10T14:42:59.593 INFO:teuthology.task.install.deb:Installing system (non-project) packages: python3-pytest, python3-xmltodict, python3-jmespath on remote deb x86_64 2026-03-10T14:42:59.594 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T14:42:59.618 INFO:teuthology.orchestra.run.vm03.stderr:Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)). 
2026-03-10T14:42:59.641 INFO:teuthology.orchestra.run.vm03.stdout:uid [ unknown] Ceph automated package build (Ceph automated package build) 2026-03-10T14:42:59.641 INFO:teuthology.orchestra.run.vm03.stdout:uid [ unknown] Ceph.com (release key) 2026-03-10T14:42:59.642 INFO:teuthology.task.install.deb:Installing packages: ceph, cephadm, ceph-mds, ceph-mgr, ceph-common, ceph-fuse, ceph-test, ceph-volume, radosgw, python3-rados, python3-rgw, python3-cephfs, python3-rbd, libcephfs2, libcephfs-dev, librados2, librbd1, rbd-fuse on remote deb x86_64 2026-03-10T14:42:59.642 INFO:teuthology.task.install.deb:Installing system (non-project) packages: python3-pytest, python3-xmltodict, python3-jmespath on remote deb x86_64 2026-03-10T14:42:59.642 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T14:43:00.207 INFO:teuthology.task.install.deb:Pulling from https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default/ 2026-03-10T14:43:00.207 INFO:teuthology.task.install.deb:Package version is 19.2.3-678-ge911bdeb-1jammy 2026-03-10T14:43:00.276 INFO:teuthology.task.install.deb:Pulling from https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default/ 2026-03-10T14:43:00.276 INFO:teuthology.task.install.deb:Package version is 19.2.3-678-ge911bdeb-1jammy 2026-03-10T14:43:00.760 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-10T14:43:00.805 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/etc/apt/sources.list.d/ceph.list 2026-03-10T14:43:00.807 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-10T14:43:00.807 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/etc/apt/sources.list.d/ceph.list 2026-03-10T14:43:00.813 DEBUG:teuthology.orchestra.run.vm03:> sudo apt-get update 2026-03-10T14:43:00.814 
DEBUG:teuthology.orchestra.run.vm00:> sudo apt-get update 2026-03-10T14:43:01.011 INFO:teuthology.orchestra.run.vm00.stdout:Hit:1 https://security.ubuntu.com/ubuntu jammy-security InRelease 2026-03-10T14:43:01.012 INFO:teuthology.orchestra.run.vm00.stdout:Hit:2 https://archive.ubuntu.com/ubuntu jammy InRelease 2026-03-10T14:43:01.022 INFO:teuthology.orchestra.run.vm00.stdout:Hit:3 https://archive.ubuntu.com/ubuntu jammy-updates InRelease 2026-03-10T14:43:01.028 INFO:teuthology.orchestra.run.vm00.stdout:Hit:4 https://archive.ubuntu.com/ubuntu jammy-backports InRelease 2026-03-10T14:43:01.394 INFO:teuthology.orchestra.run.vm03.stdout:Hit:1 https://archive.ubuntu.com/ubuntu jammy InRelease 2026-03-10T14:43:01.480 INFO:teuthology.orchestra.run.vm00.stdout:Ign:5 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy InRelease 2026-03-10T14:43:01.481 INFO:teuthology.orchestra.run.vm03.stdout:Ign:2 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy InRelease 2026-03-10T14:43:01.492 INFO:teuthology.orchestra.run.vm03.stdout:Hit:3 https://archive.ubuntu.com/ubuntu jammy-updates InRelease 2026-03-10T14:43:01.548 INFO:teuthology.orchestra.run.vm03.stdout:Hit:4 https://security.ubuntu.com/ubuntu jammy-security InRelease 2026-03-10T14:43:01.596 INFO:teuthology.orchestra.run.vm00.stdout:Get:6 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy Release [7662 B] 2026-03-10T14:43:01.597 INFO:teuthology.orchestra.run.vm03.stdout:Get:5 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy Release [7662 B] 2026-03-10T14:43:01.598 INFO:teuthology.orchestra.run.vm03.stdout:Hit:6 https://archive.ubuntu.com/ubuntu jammy-backports InRelease 2026-03-10T14:43:01.711 INFO:teuthology.orchestra.run.vm00.stdout:Ign:7 
https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy Release.gpg 2026-03-10T14:43:01.714 INFO:teuthology.orchestra.run.vm03.stdout:Ign:7 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy Release.gpg 2026-03-10T14:43:01.827 INFO:teuthology.orchestra.run.vm00.stdout:Get:8 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 Packages [18.1 kB] 2026-03-10T14:43:01.829 INFO:teuthology.orchestra.run.vm03.stdout:Get:8 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 Packages [18.1 kB] 2026-03-10T14:43:01.916 INFO:teuthology.orchestra.run.vm03.stdout:Fetched 25.8 kB in 1s (27.7 kB/s) 2026-03-10T14:43:02.025 INFO:teuthology.orchestra.run.vm00.stdout:Fetched 25.8 kB in 1s (24.8 kB/s) 2026-03-10T14:43:02.644 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 
2026-03-10T14:43:02.660 DEBUG:teuthology.orchestra.run.vm03:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=19.2.3-678-ge911bdeb-1jammy cephadm=19.2.3-678-ge911bdeb-1jammy ceph-mds=19.2.3-678-ge911bdeb-1jammy ceph-mgr=19.2.3-678-ge911bdeb-1jammy ceph-common=19.2.3-678-ge911bdeb-1jammy ceph-fuse=19.2.3-678-ge911bdeb-1jammy ceph-test=19.2.3-678-ge911bdeb-1jammy ceph-volume=19.2.3-678-ge911bdeb-1jammy radosgw=19.2.3-678-ge911bdeb-1jammy python3-rados=19.2.3-678-ge911bdeb-1jammy python3-rgw=19.2.3-678-ge911bdeb-1jammy python3-cephfs=19.2.3-678-ge911bdeb-1jammy python3-rbd=19.2.3-678-ge911bdeb-1jammy libcephfs2=19.2.3-678-ge911bdeb-1jammy libcephfs-dev=19.2.3-678-ge911bdeb-1jammy librados2=19.2.3-678-ge911bdeb-1jammy librbd1=19.2.3-678-ge911bdeb-1jammy rbd-fuse=19.2.3-678-ge911bdeb-1jammy 2026-03-10T14:43:02.700 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 2026-03-10T14:43:02.735 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists... 
2026-03-10T14:43:02.756 DEBUG:teuthology.orchestra.run.vm00:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=19.2.3-678-ge911bdeb-1jammy cephadm=19.2.3-678-ge911bdeb-1jammy ceph-mds=19.2.3-678-ge911bdeb-1jammy ceph-mgr=19.2.3-678-ge911bdeb-1jammy ceph-common=19.2.3-678-ge911bdeb-1jammy ceph-fuse=19.2.3-678-ge911bdeb-1jammy ceph-test=19.2.3-678-ge911bdeb-1jammy ceph-volume=19.2.3-678-ge911bdeb-1jammy radosgw=19.2.3-678-ge911bdeb-1jammy python3-rados=19.2.3-678-ge911bdeb-1jammy python3-rgw=19.2.3-678-ge911bdeb-1jammy python3-cephfs=19.2.3-678-ge911bdeb-1jammy python3-rbd=19.2.3-678-ge911bdeb-1jammy libcephfs2=19.2.3-678-ge911bdeb-1jammy libcephfs-dev=19.2.3-678-ge911bdeb-1jammy librados2=19.2.3-678-ge911bdeb-1jammy librbd1=19.2.3-678-ge911bdeb-1jammy rbd-fuse=19.2.3-678-ge911bdeb-1jammy 2026-03-10T14:43:02.800 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists... 2026-03-10T14:43:02.935 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree... 2026-03-10T14:43:02.936 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information... 2026-03-10T14:43:02.952 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree... 2026-03-10T14:43:02.953 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information... 2026-03-10T14:43:03.067 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T14:43:03.067 INFO:teuthology.orchestra.run.vm00.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-10T14:43:03.067 INFO:teuthology.orchestra.run.vm00.stdout: libsgutils2-2 sg3-utils sg3-utils-udev 2026-03-10T14:43:03.067 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them. 
2026-03-10T14:43:03.067 INFO:teuthology.orchestra.run.vm00.stdout:The following additional packages will be installed: 2026-03-10T14:43:03.067 INFO:teuthology.orchestra.run.vm00.stdout: ceph-base ceph-mgr-cephadm ceph-mgr-dashboard ceph-mgr-diskprediction-local 2026-03-10T14:43:03.067 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-k8sevents ceph-mgr-modules-core ceph-mon ceph-osd jq 2026-03-10T14:43:03.067 INFO:teuthology.orchestra.run.vm00.stdout: libdouble-conversion3 libfuse2 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-10T14:43:03.067 INFO:teuthology.orchestra.run.vm00.stdout: liboath0 libonig5 libpcre2-16-0 libqt5core5a libqt5dbus5 libqt5network5 2026-03-10T14:43:03.068 INFO:teuthology.orchestra.run.vm00.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsqlite3-mod-ceph 2026-03-10T14:43:03.068 INFO:teuthology.orchestra.run.vm00.stdout: libthrift-0.16.0 lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-10T14:43:03.069 INFO:teuthology.orchestra.run.vm00.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-10T14:43:03.069 INFO:teuthology.orchestra.run.vm00.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-10T14:43:03.069 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy3 python3-google-auth python3-iniconfig 2026-03-10T14:43:03.069 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-10T14:43:03.069 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-10T14:43:03.069 INFO:teuthology.orchestra.run.vm00.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-10T14:43:03.069 INFO:teuthology.orchestra.run.vm00.stdout: python3-pastescript python3-pecan python3-pluggy python3-portend 2026-03-10T14:43:03.069 INFO:teuthology.orchestra.run.vm00.stdout: python3-prettytable python3-psutil 
python3-py python3-pygments 2026-03-10T14:43:03.069 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyinotify python3-pytest python3-repoze.lru 2026-03-10T14:43:03.069 INFO:teuthology.orchestra.run.vm00.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric 2026-03-10T14:43:03.069 INFO:teuthology.orchestra.run.vm00.stdout: python3-simplejson python3-singledispatch python3-sklearn 2026-03-10T14:43:03.069 INFO:teuthology.orchestra.run.vm00.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl 2026-03-10T14:43:03.069 INFO:teuthology.orchestra.run.vm00.stdout: python3-toml python3-waitress python3-wcwidth python3-webob 2026-03-10T14:43:03.069 INFO:teuthology.orchestra.run.vm00.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile 2026-03-10T14:43:03.069 INFO:teuthology.orchestra.run.vm00.stdout: qttranslations5-l10n smartmontools socat unzip xmlstarlet zip 2026-03-10T14:43:03.069 INFO:teuthology.orchestra.run.vm00.stdout:Suggested packages: 2026-03-10T14:43:03.069 INFO:teuthology.orchestra.run.vm00.stdout: python3-influxdb readline-doc python3-beaker python-mako-doc 2026-03-10T14:43:03.069 INFO:teuthology.orchestra.run.vm00.stdout: python-natsort-doc httpd-wsgi libapache2-mod-python libapache2-mod-scgi 2026-03-10T14:43:03.069 INFO:teuthology.orchestra.run.vm00.stdout: libjs-mochikit python-pecan-doc python-psutil-doc subversion 2026-03-10T14:43:03.069 INFO:teuthology.orchestra.run.vm00.stdout: python-pygments-doc ttf-bitstream-vera python-pyinotify-doc python3-dap 2026-03-10T14:43:03.069 INFO:teuthology.orchestra.run.vm00.stdout: python-sklearn-doc ipython3 python-waitress-doc python-webob-doc 2026-03-10T14:43:03.069 INFO:teuthology.orchestra.run.vm00.stdout: python-webtest-doc python-werkzeug-doc python3-watchdog gsmartcontrol 2026-03-10T14:43:03.069 INFO:teuthology.orchestra.run.vm00.stdout: smart-notifier mailx | mailutils 2026-03-10T14:43:03.069 
INFO:teuthology.orchestra.run.vm00.stdout:Recommended packages: 2026-03-10T14:43:03.069 INFO:teuthology.orchestra.run.vm00.stdout: btrfs-tools 2026-03-10T14:43:03.108 INFO:teuthology.orchestra.run.vm00.stdout:The following NEW packages will be installed: 2026-03-10T14:43:03.108 INFO:teuthology.orchestra.run.vm00.stdout: ceph ceph-base ceph-common ceph-fuse ceph-mds ceph-mgr ceph-mgr-cephadm 2026-03-10T14:43:03.108 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-k8sevents 2026-03-10T14:43:03.108 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core ceph-mon ceph-osd ceph-test ceph-volume cephadm jq 2026-03-10T14:43:03.108 INFO:teuthology.orchestra.run.vm00.stdout: libcephfs-dev libcephfs2 libdouble-conversion3 libfuse2 libjq1 liblttng-ust1 2026-03-10T14:43:03.108 INFO:teuthology.orchestra.run.vm00.stdout: liblua5.3-dev libnbd0 liboath0 libonig5 libpcre2-16-0 libqt5core5a 2026-03-10T14:43:03.108 INFO:teuthology.orchestra.run.vm00.stdout: libqt5dbus5 libqt5network5 libradosstriper1 librdkafka1 libreadline-dev 2026-03-10T14:43:03.109 INFO:teuthology.orchestra.run.vm00.stdout: librgw2 libsqlite3-mod-ceph libthrift-0.16.0 lua-any lua-sec lua-socket 2026-03-10T14:43:03.109 INFO:teuthology.orchestra.run.vm00.stdout: lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc 2026-03-10T14:43:03.109 INFO:teuthology.orchestra.run.vm00.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-10T14:43:03.109 INFO:teuthology.orchestra.run.vm00.stdout: python3-ceph-argparse python3-ceph-common python3-cephfs python3-cheroot 2026-03-10T14:43:03.109 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy3 python3-google-auth python3-iniconfig 2026-03-10T14:43:03.109 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-10T14:43:03.109 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.text python3-joblib python3-kubernetes 
python3-logutils 2026-03-10T14:43:03.109 INFO:teuthology.orchestra.run.vm00.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-10T14:43:03.109 INFO:teuthology.orchestra.run.vm00.stdout: python3-pastescript python3-pecan python3-pluggy python3-portend 2026-03-10T14:43:03.109 INFO:teuthology.orchestra.run.vm00.stdout: python3-prettytable python3-psutil python3-py python3-pygments 2026-03-10T14:43:03.109 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyinotify python3-pytest python3-rados python3-rbd 2026-03-10T14:43:03.109 INFO:teuthology.orchestra.run.vm00.stdout: python3-repoze.lru python3-requests-oauthlib python3-rgw python3-routes 2026-03-10T14:43:03.109 INFO:teuthology.orchestra.run.vm00.stdout: python3-rsa python3-simplegeneric python3-simplejson python3-singledispatch 2026-03-10T14:43:03.109 INFO:teuthology.orchestra.run.vm00.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora 2026-03-10T14:43:03.109 INFO:teuthology.orchestra.run.vm00.stdout: python3-threadpoolctl python3-toml python3-waitress python3-wcwidth 2026-03-10T14:43:03.109 INFO:teuthology.orchestra.run.vm00.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-10T14:43:03.109 INFO:teuthology.orchestra.run.vm00.stdout: python3-zc.lockfile qttranslations5-l10n radosgw rbd-fuse smartmontools 2026-03-10T14:43:03.109 INFO:teuthology.orchestra.run.vm00.stdout: socat unzip xmlstarlet zip 2026-03-10T14:43:03.109 INFO:teuthology.orchestra.run.vm00.stdout:The following packages will be upgraded: 2026-03-10T14:43:03.109 INFO:teuthology.orchestra.run.vm00.stdout: librados2 librbd1 2026-03-10T14:43:03.201 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T14:43:03.202 INFO:teuthology.orchestra.run.vm03.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-10T14:43:03.202 INFO:teuthology.orchestra.run.vm03.stdout: 
libsgutils2-2 sg3-utils sg3-utils-udev 2026-03-10T14:43:03.202 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T14:43:03.203 INFO:teuthology.orchestra.run.vm03.stdout:The following additional packages will be installed: 2026-03-10T14:43:03.203 INFO:teuthology.orchestra.run.vm03.stdout: ceph-base ceph-mgr-cephadm ceph-mgr-dashboard ceph-mgr-diskprediction-local 2026-03-10T14:43:03.203 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-k8sevents ceph-mgr-modules-core ceph-mon ceph-osd jq 2026-03-10T14:43:03.203 INFO:teuthology.orchestra.run.vm03.stdout: libdouble-conversion3 libfuse2 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-10T14:43:03.203 INFO:teuthology.orchestra.run.vm03.stdout: liboath0 libonig5 libpcre2-16-0 libqt5core5a libqt5dbus5 libqt5network5 2026-03-10T14:43:03.203 INFO:teuthology.orchestra.run.vm03.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsqlite3-mod-ceph 2026-03-10T14:43:03.204 INFO:teuthology.orchestra.run.vm03.stdout: libthrift-0.16.0 lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-10T14:43:03.204 INFO:teuthology.orchestra.run.vm03.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-10T14:43:03.204 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-10T14:43:03.204 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy3 python3-google-auth python3-iniconfig 2026-03-10T14:43:03.204 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-10T14:43:03.204 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-10T14:43:03.204 INFO:teuthology.orchestra.run.vm03.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-10T14:43:03.204 INFO:teuthology.orchestra.run.vm03.stdout: python3-pastescript 
python3-pecan python3-pluggy python3-portend 2026-03-10T14:43:03.204 INFO:teuthology.orchestra.run.vm03.stdout: python3-prettytable python3-psutil python3-py python3-pygments 2026-03-10T14:43:03.204 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyinotify python3-pytest python3-repoze.lru 2026-03-10T14:43:03.204 INFO:teuthology.orchestra.run.vm03.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric 2026-03-10T14:43:03.204 INFO:teuthology.orchestra.run.vm03.stdout: python3-simplejson python3-singledispatch python3-sklearn 2026-03-10T14:43:03.204 INFO:teuthology.orchestra.run.vm03.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl 2026-03-10T14:43:03.204 INFO:teuthology.orchestra.run.vm03.stdout: python3-toml python3-waitress python3-wcwidth python3-webob 2026-03-10T14:43:03.204 INFO:teuthology.orchestra.run.vm03.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile 2026-03-10T14:43:03.204 INFO:teuthology.orchestra.run.vm03.stdout: qttranslations5-l10n smartmontools socat unzip xmlstarlet zip 2026-03-10T14:43:03.204 INFO:teuthology.orchestra.run.vm00.stdout:2 upgraded, 107 newly installed, 0 to remove and 12 not upgraded. 2026-03-10T14:43:03.204 INFO:teuthology.orchestra.run.vm00.stdout:Need to get 178 MB of archives. 2026-03-10T14:43:03.204 INFO:teuthology.orchestra.run.vm00.stdout:After this operation, 782 MB of additional disk space will be used. 
2026-03-10T14:43:03.204 INFO:teuthology.orchestra.run.vm00.stdout:Get:1 https://archive.ubuntu.com/ubuntu jammy/main amd64 liblttng-ust1 amd64 2.13.1-1ubuntu1 [190 kB] 2026-03-10T14:43:03.204 INFO:teuthology.orchestra.run.vm03.stdout:Suggested packages: 2026-03-10T14:43:03.204 INFO:teuthology.orchestra.run.vm03.stdout: python3-influxdb readline-doc python3-beaker python-mako-doc 2026-03-10T14:43:03.204 INFO:teuthology.orchestra.run.vm03.stdout: python-natsort-doc httpd-wsgi libapache2-mod-python libapache2-mod-scgi 2026-03-10T14:43:03.204 INFO:teuthology.orchestra.run.vm03.stdout: libjs-mochikit python-pecan-doc python-psutil-doc subversion 2026-03-10T14:43:03.204 INFO:teuthology.orchestra.run.vm03.stdout: python-pygments-doc ttf-bitstream-vera python-pyinotify-doc python3-dap 2026-03-10T14:43:03.204 INFO:teuthology.orchestra.run.vm03.stdout: python-sklearn-doc ipython3 python-waitress-doc python-webob-doc 2026-03-10T14:43:03.204 INFO:teuthology.orchestra.run.vm03.stdout: python-webtest-doc python-werkzeug-doc python3-watchdog gsmartcontrol 2026-03-10T14:43:03.204 INFO:teuthology.orchestra.run.vm03.stdout: smart-notifier mailx | mailutils 2026-03-10T14:43:03.204 INFO:teuthology.orchestra.run.vm03.stdout:Recommended packages: 2026-03-10T14:43:03.204 INFO:teuthology.orchestra.run.vm03.stdout: btrfs-tools 2026-03-10T14:43:03.243 INFO:teuthology.orchestra.run.vm00.stdout:Get:2 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libdouble-conversion3 amd64 3.1.7-4 [39.0 kB] 2026-03-10T14:43:03.243 INFO:teuthology.orchestra.run.vm00.stdout:Get:3 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libpcre2-16-0 amd64 10.39-3ubuntu0.1 [203 kB] 2026-03-10T14:43:03.247 INFO:teuthology.orchestra.run.vm03.stdout:The following NEW packages will be installed: 2026-03-10T14:43:03.247 INFO:teuthology.orchestra.run.vm03.stdout: ceph ceph-base ceph-common ceph-fuse ceph-mds ceph-mgr ceph-mgr-cephadm 2026-03-10T14:43:03.247 INFO:teuthology.orchestra.run.vm03.stdout: 
ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-k8sevents 2026-03-10T14:43:03.247 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core ceph-mon ceph-osd ceph-test ceph-volume cephadm jq 2026-03-10T14:43:03.247 INFO:teuthology.orchestra.run.vm03.stdout: libcephfs-dev libcephfs2 libdouble-conversion3 libfuse2 libjq1 liblttng-ust1 2026-03-10T14:43:03.247 INFO:teuthology.orchestra.run.vm03.stdout: liblua5.3-dev libnbd0 liboath0 libonig5 libpcre2-16-0 libqt5core5a 2026-03-10T14:43:03.247 INFO:teuthology.orchestra.run.vm03.stdout: libqt5dbus5 libqt5network5 libradosstriper1 librdkafka1 libreadline-dev 2026-03-10T14:43:03.247 INFO:teuthology.orchestra.run.vm03.stdout: librgw2 libsqlite3-mod-ceph libthrift-0.16.0 lua-any lua-sec lua-socket 2026-03-10T14:43:03.248 INFO:teuthology.orchestra.run.vm03.stdout: lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc 2026-03-10T14:43:03.248 INFO:teuthology.orchestra.run.vm03.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-10T14:43:03.248 INFO:teuthology.orchestra.run.vm03.stdout: python3-ceph-argparse python3-ceph-common python3-cephfs python3-cheroot 2026-03-10T14:43:03.248 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy3 python3-google-auth python3-iniconfig 2026-03-10T14:43:03.248 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-10T14:43:03.248 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-10T14:43:03.248 INFO:teuthology.orchestra.run.vm03.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-10T14:43:03.248 INFO:teuthology.orchestra.run.vm03.stdout: python3-pastescript python3-pecan python3-pluggy python3-portend 2026-03-10T14:43:03.248 INFO:teuthology.orchestra.run.vm03.stdout: python3-prettytable python3-psutil python3-py python3-pygments 2026-03-10T14:43:03.248 
INFO:teuthology.orchestra.run.vm03.stdout: python3-pyinotify python3-pytest python3-rados python3-rbd
2026-03-10T14:43:03.248 INFO:teuthology.orchestra.run.vm03.stdout: python3-repoze.lru python3-requests-oauthlib python3-rgw python3-routes
2026-03-10T14:43:03.248 INFO:teuthology.orchestra.run.vm03.stdout: python3-rsa python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-10T14:43:03.248 INFO:teuthology.orchestra.run.vm03.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-10T14:43:03.248 INFO:teuthology.orchestra.run.vm03.stdout: python3-threadpoolctl python3-toml python3-waitress python3-wcwidth
2026-03-10T14:43:03.248 INFO:teuthology.orchestra.run.vm03.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-10T14:43:03.248 INFO:teuthology.orchestra.run.vm03.stdout: python3-zc.lockfile qttranslations5-l10n radosgw rbd-fuse smartmontools
2026-03-10T14:43:03.248 INFO:teuthology.orchestra.run.vm03.stdout: socat unzip xmlstarlet zip
2026-03-10T14:43:03.248 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be upgraded:
2026-03-10T14:43:03.249 INFO:teuthology.orchestra.run.vm03.stdout: librados2 librbd1
2026-03-10T14:43:03.251 INFO:teuthology.orchestra.run.vm00.stdout:Get:4 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5core5a amd64 5.15.3+dfsg-2ubuntu0.2 [2006 kB]
2026-03-10T14:43:03.277 INFO:teuthology.orchestra.run.vm00.stdout:Get:5 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5dbus5 amd64 5.15.3+dfsg-2ubuntu0.2 [222 kB]
2026-03-10T14:43:03.278 INFO:teuthology.orchestra.run.vm00.stdout:Get:6 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5network5 amd64 5.15.3+dfsg-2ubuntu0.2 [731 kB]
2026-03-10T14:43:03.292 INFO:teuthology.orchestra.run.vm00.stdout:Get:7 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libthrift-0.16.0 amd64 0.16.0-2 [267 kB]
2026-03-10T14:43:03.294 INFO:teuthology.orchestra.run.vm00.stdout:Get:8 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libnbd0 amd64 1.10.5-1 [71.3 kB]
2026-03-10T14:43:03.295 INFO:teuthology.orchestra.run.vm00.stdout:Get:9 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-wcwidth all 0.2.5+dfsg1-1 [21.9 kB]
2026-03-10T14:43:03.295 INFO:teuthology.orchestra.run.vm00.stdout:Get:10 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-prettytable all 2.5.0-2 [31.3 kB]
2026-03-10T14:43:03.296 INFO:teuthology.orchestra.run.vm00.stdout:Get:11 https://archive.ubuntu.com/ubuntu jammy/universe amd64 librdkafka1 amd64 1.8.0-1build1 [633 kB]
2026-03-10T14:43:03.300 INFO:teuthology.orchestra.run.vm00.stdout:Get:12 https://archive.ubuntu.com/ubuntu jammy/main amd64 libreadline-dev amd64 8.1.2-1 [166 kB]
2026-03-10T14:43:03.300 INFO:teuthology.orchestra.run.vm00.stdout:Get:13 https://archive.ubuntu.com/ubuntu jammy/main amd64 liblua5.3-dev amd64 5.3.6-1build1 [167 kB]
2026-03-10T14:43:03.301 INFO:teuthology.orchestra.run.vm00.stdout:Get:14 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua5.1 amd64 5.1.5-8.1build4 [94.6 kB]
2026-03-10T14:43:03.302 INFO:teuthology.orchestra.run.vm00.stdout:Get:15 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-any all 27ubuntu1 [5034 B]
2026-03-10T14:43:03.305 INFO:teuthology.orchestra.run.vm00.stdout:Get:16 https://archive.ubuntu.com/ubuntu jammy/main amd64 zip amd64 3.0-12build2 [176 kB]
2026-03-10T14:43:03.307 INFO:teuthology.orchestra.run.vm00.stdout:Get:17 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 unzip amd64 6.0-26ubuntu3.2 [175 kB]
2026-03-10T14:43:03.308 INFO:teuthology.orchestra.run.vm00.stdout:Get:18 https://archive.ubuntu.com/ubuntu jammy/universe amd64 luarocks all 3.8.0+dfsg1-1 [140 kB]
2026-03-10T14:43:03.309 INFO:teuthology.orchestra.run.vm00.stdout:Get:19 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 liboath0 amd64 2.6.7-3ubuntu0.1 [41.3 kB]
2026-03-10T14:43:03.310 INFO:teuthology.orchestra.run.vm00.stdout:Get:20 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.functools all 3.4.0-2 [9030 B]
2026-03-10T14:43:03.313 INFO:teuthology.orchestra.run.vm00.stdout:Get:21 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-cheroot all 8.5.2+ds1-1ubuntu3.1 [71.1 kB]
2026-03-10T14:43:03.313 INFO:teuthology.orchestra.run.vm00.stdout:Get:22 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.classes all 3.2.1-3 [6452 B]
2026-03-10T14:43:03.314 INFO:teuthology.orchestra.run.vm00.stdout:Get:23 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.text all 3.6.0-2 [8716 B]
2026-03-10T14:43:03.314 INFO:teuthology.orchestra.run.vm00.stdout:Get:24 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.collections all 3.4.0-2 [11.4 kB]
2026-03-10T14:43:03.314 INFO:teuthology.orchestra.run.vm00.stdout:Get:25 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-tempora all 4.1.2-1 [14.8 kB]
2026-03-10T14:43:03.321 INFO:teuthology.orchestra.run.vm00.stdout:Get:26 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-portend all 3.0.0-1 [7240 B]
2026-03-10T14:43:03.321 INFO:teuthology.orchestra.run.vm00.stdout:Get:27 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-zc.lockfile all 2.0-1 [8980 B]
2026-03-10T14:43:03.321 INFO:teuthology.orchestra.run.vm00.stdout:Get:28 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-cherrypy3 all 18.6.1-4 [208 kB]
2026-03-10T14:43:03.323 INFO:teuthology.orchestra.run.vm00.stdout:Get:29 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-natsort all 8.0.2-1 [35.3 kB]
2026-03-10T14:43:03.323 INFO:teuthology.orchestra.run.vm00.stdout:Get:30 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-logutils all 0.3.3-8 [17.6 kB]
2026-03-10T14:43:03.328 INFO:teuthology.orchestra.run.vm00.stdout:Get:31 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-mako all 1.1.3+ds1-2ubuntu0.1 [60.5 kB]
2026-03-10T14:43:03.329 INFO:teuthology.orchestra.run.vm00.stdout:Get:32 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-simplegeneric all 0.8.1-3 [11.3 kB]
2026-03-10T14:43:03.329 INFO:teuthology.orchestra.run.vm00.stdout:Get:33 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-singledispatch all 3.4.0.3-3 [7320 B]
2026-03-10T14:43:03.330 INFO:teuthology.orchestra.run.vm00.stdout:Get:34 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-webob all 1:1.8.6-1.1ubuntu0.1 [86.7 kB]
2026-03-10T14:43:03.330 INFO:teuthology.orchestra.run.vm00.stdout:Get:35 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-waitress all 1.4.4-1.1ubuntu1.1 [47.0 kB]
2026-03-10T14:43:03.336 INFO:teuthology.orchestra.run.vm00.stdout:Get:36 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-tempita all 0.5.2-6ubuntu1 [15.1 kB]
2026-03-10T14:43:03.336 INFO:teuthology.orchestra.run.vm00.stdout:Get:37 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-paste all 3.5.0+dfsg1-1 [456 kB]
2026-03-10T14:43:03.340 INFO:teuthology.orchestra.run.vm00.stdout:Get:38 https://archive.ubuntu.com/ubuntu jammy/main amd64 python-pastedeploy-tpl all 2.1.1-1 [4892 B]
2026-03-10T14:43:03.340 INFO:teuthology.orchestra.run.vm00.stdout:Get:39 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pastedeploy all 2.1.1-1 [26.6 kB]
2026-03-10T14:43:03.341 INFO:teuthology.orchestra.run.vm00.stdout:Get:40 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-webtest all 2.0.35-1 [28.5 kB]
2026-03-10T14:43:03.343 INFO:teuthology.orchestra.run.vm00.stdout:Get:41 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pecan all 1.3.3-4ubuntu2 [87.3 kB]
2026-03-10T14:43:03.344 INFO:teuthology.orchestra.run.vm00.stdout:Get:42 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-werkzeug all 2.0.2+dfsg1-1ubuntu0.22.04.3 [181 kB]
2026-03-10T14:43:03.346 INFO:teuthology.orchestra.run.vm00.stdout:Get:43 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libfuse2 amd64 2.9.9-5ubuntu3 [90.3 kB]
2026-03-10T14:43:03.347 INFO:teuthology.orchestra.run.vm00.stdout:Get:44 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 python3-asyncssh all 2.5.0-1ubuntu0.1 [189 kB]
2026-03-10T14:43:03.348 INFO:teuthology.orchestra.run.vm00.stdout:Get:45 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-repoze.lru all 0.7-2 [12.1 kB]
2026-03-10T14:43:03.351 INFO:teuthology.orchestra.run.vm00.stdout:Get:46 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-routes all 2.5.1-1ubuntu1 [89.0 kB]
2026-03-10T14:43:03.352 INFO:teuthology.orchestra.run.vm00.stdout:Get:47 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-sklearn-lib amd64 0.23.2-5ubuntu6 [2058 kB]
2026-03-10T14:43:03.381 INFO:teuthology.orchestra.run.vm00.stdout:Get:48 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-joblib all 0.17.0-4ubuntu1 [204 kB]
2026-03-10T14:43:03.382 INFO:teuthology.orchestra.run.vm00.stdout:Get:49 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-threadpoolctl all 3.1.0-1 [21.3 kB]
2026-03-10T14:43:03.383 INFO:teuthology.orchestra.run.vm00.stdout:Get:50 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-sklearn all 0.23.2-5ubuntu6 [1829 kB]
2026-03-10T14:43:03.395 INFO:teuthology.orchestra.run.vm00.stdout:Get:51 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-cachetools all 5.0.0-1 [9722 B]
2026-03-10T14:43:03.395 INFO:teuthology.orchestra.run.vm00.stdout:Get:52 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-rsa all 4.8-1 [28.4 kB]
2026-03-10T14:43:03.395 INFO:teuthology.orchestra.run.vm00.stdout:Get:53 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-google-auth all 1.5.1-3 [35.7 kB]
2026-03-10T14:43:03.396 INFO:teuthology.orchestra.run.vm00.stdout:Get:54 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-requests-oauthlib all 1.3.0+ds-0.1 [18.7 kB]
2026-03-10T14:43:03.396 INFO:teuthology.orchestra.run.vm00.stdout:Get:55 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-websocket all 1.2.3-1 [34.7 kB]
2026-03-10T14:43:03.396 INFO:teuthology.orchestra.run.vm00.stdout:Get:56 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-kubernetes all 12.0.1-1ubuntu1 [353 kB]
2026-03-10T14:43:03.398 INFO:teuthology.orchestra.run.vm00.stdout:Get:57 https://archive.ubuntu.com/ubuntu jammy/main amd64 libonig5 amd64 6.9.7.1-2build1 [172 kB]
2026-03-10T14:43:03.399 INFO:teuthology.orchestra.run.vm00.stdout:Get:58 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libjq1 amd64 1.6-2.1ubuntu3.1 [133 kB]
2026-03-10T14:43:03.400 INFO:teuthology.orchestra.run.vm00.stdout:Get:59 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 jq amd64 1.6-2.1ubuntu3.1 [52.5 kB]
2026-03-10T14:43:03.406 INFO:teuthology.orchestra.run.vm00.stdout:Get:60 https://archive.ubuntu.com/ubuntu jammy/main amd64 socat amd64 1.7.4.1-3ubuntu4 [349 kB]
2026-03-10T14:43:03.409 INFO:teuthology.orchestra.run.vm00.stdout:Get:61 https://archive.ubuntu.com/ubuntu jammy/universe amd64 xmlstarlet amd64 1.6.1-2.1 [265 kB]
2026-03-10T14:43:03.411 INFO:teuthology.orchestra.run.vm00.stdout:Get:62 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-socket amd64 3.0~rc1+git+ac3201d-6 [78.9 kB]
2026-03-10T14:43:03.412 INFO:teuthology.orchestra.run.vm00.stdout:Get:63 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-sec amd64 1.0.2-1 [37.6 kB]
2026-03-10T14:43:03.412 INFO:teuthology.orchestra.run.vm00.stdout:Get:64 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 nvme-cli amd64 1.16-3ubuntu0.3 [474 kB]
2026-03-10T14:43:03.416 INFO:teuthology.orchestra.run.vm00.stdout:Get:65 https://archive.ubuntu.com/ubuntu jammy/main amd64 pkg-config amd64 0.29.2-1ubuntu3 [48.2 kB]
2026-03-10T14:43:03.417 INFO:teuthology.orchestra.run.vm00.stdout:Get:66 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 python-asyncssh-doc all 2.5.0-1ubuntu0.1 [309 kB]
2026-03-10T14:43:03.419 INFO:teuthology.orchestra.run.vm00.stdout:Get:67 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-iniconfig all 1.1.1-2 [6024 B]
2026-03-10T14:43:03.420 INFO:teuthology.orchestra.run.vm00.stdout:Get:68 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pastescript all 2.0.2-4 [54.6 kB]
2026-03-10T14:43:03.420 INFO:teuthology.orchestra.run.vm00.stdout:Get:69 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-pluggy all 0.13.0-7.1 [19.0 kB]
2026-03-10T14:43:03.422 INFO:teuthology.orchestra.run.vm00.stdout:Get:70 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-psutil amd64 5.9.0-1build1 [158 kB]
2026-03-10T14:43:03.424 INFO:teuthology.orchestra.run.vm00.stdout:Get:71 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-py all 1.10.0-1 [71.9 kB]
2026-03-10T14:43:03.424 INFO:teuthology.orchestra.run.vm00.stdout:Get:72 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-pygments all 2.11.2+dfsg-2ubuntu0.1 [750 kB]
2026-03-10T14:43:03.431 INFO:teuthology.orchestra.run.vm00.stdout:Get:73 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pyinotify all 0.9.6-1.3 [24.8 kB]
2026-03-10T14:43:03.431 INFO:teuthology.orchestra.run.vm00.stdout:Get:74 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-toml all 0.10.2-1 [16.5 kB]
2026-03-10T14:43:03.431 INFO:teuthology.orchestra.run.vm00.stdout:Get:75 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-pytest all 6.2.5-1ubuntu2 [214 kB]
2026-03-10T14:43:03.433 INFO:teuthology.orchestra.run.vm00.stdout:Get:76 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-simplejson amd64 3.17.6-1build1 [54.7 kB]
2026-03-10T14:43:03.433 INFO:teuthology.orchestra.run.vm00.stdout:Get:77 https://archive.ubuntu.com/ubuntu jammy/universe amd64 qttranslations5-l10n all 5.15.3-1 [1983 kB]
2026-03-10T14:43:03.451 INFO:teuthology.orchestra.run.vm00.stdout:Get:78 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 smartmontools amd64 7.2-1ubuntu0.1 [583 kB]
2026-03-10T14:43:03.707 INFO:teuthology.orchestra.run.vm03.stdout:2 upgraded, 107 newly installed, 0 to remove and 12 not upgraded.
2026-03-10T14:43:03.707 INFO:teuthology.orchestra.run.vm03.stdout:Need to get 178 MB of archives.
2026-03-10T14:43:03.707 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 782 MB of additional disk space will be used.
2026-03-10T14:43:03.707 INFO:teuthology.orchestra.run.vm03.stdout:Get:1 https://archive.ubuntu.com/ubuntu jammy/main amd64 liblttng-ust1 amd64 2.13.1-1ubuntu1 [190 kB]
2026-03-10T14:43:03.726 INFO:teuthology.orchestra.run.vm00.stdout:Get:79 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librbd1 amd64 19.2.3-678-ge911bdeb-1jammy [3257 kB]
2026-03-10T14:43:03.837 INFO:teuthology.orchestra.run.vm03.stdout:Get:2 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librbd1 amd64 19.2.3-678-ge911bdeb-1jammy [3257 kB]
2026-03-10T14:43:04.152 INFO:teuthology.orchestra.run.vm03.stdout:Get:3 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libdouble-conversion3 amd64 3.1.7-4 [39.0 kB]
2026-03-10T14:43:04.167 INFO:teuthology.orchestra.run.vm03.stdout:Get:4 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libpcre2-16-0 amd64 10.39-3ubuntu0.1 [203 kB]
2026-03-10T14:43:04.258 INFO:teuthology.orchestra.run.vm03.stdout:Get:5 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5core5a amd64 5.15.3+dfsg-2ubuntu0.2 [2006 kB]
2026-03-10T14:43:04.524 INFO:teuthology.orchestra.run.vm03.stdout:Get:6 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5dbus5 amd64 5.15.3+dfsg-2ubuntu0.2 [222 kB]
2026-03-10T14:43:04.541 INFO:teuthology.orchestra.run.vm03.stdout:Get:7 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5network5 amd64 5.15.3+dfsg-2ubuntu0.2 [731 kB]
2026-03-10T14:43:04.562 INFO:teuthology.orchestra.run.vm00.stdout:Get:80 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librados2 amd64 19.2.3-678-ge911bdeb-1jammy [3597 kB]
2026-03-10T14:43:04.577 INFO:teuthology.orchestra.run.vm03.stdout:Get:8 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libthrift-0.16.0 amd64 0.16.0-2 [267 kB]
2026-03-10T14:43:04.586 INFO:teuthology.orchestra.run.vm03.stdout:Get:9 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libnbd0 amd64 1.10.5-1 [71.3 kB]
2026-03-10T14:43:04.589 INFO:teuthology.orchestra.run.vm03.stdout:Get:10 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-wcwidth all 0.2.5+dfsg1-1 [21.9 kB]
2026-03-10T14:43:04.590 INFO:teuthology.orchestra.run.vm03.stdout:Get:11 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-prettytable all 2.5.0-2 [31.3 kB]
2026-03-10T14:43:04.591 INFO:teuthology.orchestra.run.vm03.stdout:Get:12 https://archive.ubuntu.com/ubuntu jammy/universe amd64 librdkafka1 amd64 1.8.0-1build1 [633 kB]
2026-03-10T14:43:04.613 INFO:teuthology.orchestra.run.vm03.stdout:Get:13 https://archive.ubuntu.com/ubuntu jammy/main amd64 libreadline-dev amd64 8.1.2-1 [166 kB]
2026-03-10T14:43:04.618 INFO:teuthology.orchestra.run.vm03.stdout:Get:14 https://archive.ubuntu.com/ubuntu jammy/main amd64 liblua5.3-dev amd64 5.3.6-1build1 [167 kB]
2026-03-10T14:43:04.622 INFO:teuthology.orchestra.run.vm03.stdout:Get:15 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua5.1 amd64 5.1.5-8.1build4 [94.6 kB]
2026-03-10T14:43:04.700 INFO:teuthology.orchestra.run.vm00.stdout:Get:81 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs2 amd64 19.2.3-678-ge911bdeb-1jammy [979 kB]
2026-03-10T14:43:04.715 INFO:teuthology.orchestra.run.vm03.stdout:Get:16 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-any all 27ubuntu1 [5034 B]
2026-03-10T14:43:04.715 INFO:teuthology.orchestra.run.vm03.stdout:Get:17 https://archive.ubuntu.com/ubuntu jammy/main amd64 zip amd64 3.0-12build2 [176 kB]
2026-03-10T14:43:04.717 INFO:teuthology.orchestra.run.vm03.stdout:Get:18 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 unzip amd64 6.0-26ubuntu3.2 [175 kB]
2026-03-10T14:43:04.720 INFO:teuthology.orchestra.run.vm03.stdout:Get:19 https://archive.ubuntu.com/ubuntu jammy/universe amd64 luarocks all 3.8.0+dfsg1-1 [140 kB]
2026-03-10T14:43:04.722 INFO:teuthology.orchestra.run.vm03.stdout:Get:20 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 liboath0 amd64 2.6.7-3ubuntu0.1 [41.3 kB]
2026-03-10T14:43:04.723 INFO:teuthology.orchestra.run.vm03.stdout:Get:21 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.functools all 3.4.0-2 [9030 B]
2026-03-10T14:43:04.724 INFO:teuthology.orchestra.run.vm03.stdout:Get:22 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-cheroot all 8.5.2+ds1-1ubuntu3.1 [71.1 kB]
2026-03-10T14:43:04.724 INFO:teuthology.orchestra.run.vm03.stdout:Get:23 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librados2 amd64 19.2.3-678-ge911bdeb-1jammy [3597 kB]
2026-03-10T14:43:04.730 INFO:teuthology.orchestra.run.vm03.stdout:Get:24 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.classes all 3.2.1-3 [6452 B]
2026-03-10T14:43:04.815 INFO:teuthology.orchestra.run.vm03.stdout:Get:25 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.text all 3.6.0-2 [8716 B]
2026-03-10T14:43:04.816 INFO:teuthology.orchestra.run.vm03.stdout:Get:26 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.collections all 3.4.0-2 [11.4 kB]
2026-03-10T14:43:04.816 INFO:teuthology.orchestra.run.vm03.stdout:Get:27 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-tempora all 4.1.2-1 [14.8 kB]
2026-03-10T14:43:04.859 INFO:teuthology.orchestra.run.vm03.stdout:Get:28 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs2 amd64 19.2.3-678-ge911bdeb-1jammy [979 kB]
2026-03-10T14:43:04.871 INFO:teuthology.orchestra.run.vm03.stdout:Get:29 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rados amd64 19.2.3-678-ge911bdeb-1jammy [357 kB]
2026-03-10T14:43:04.891 INFO:teuthology.orchestra.run.vm03.stdout:Get:30 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-ceph-argparse all 19.2.3-678-ge911bdeb-1jammy [32.9 kB]
2026-03-10T14:43:04.891 INFO:teuthology.orchestra.run.vm03.stdout:Get:31 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-cephfs amd64 19.2.3-678-ge911bdeb-1jammy [184 kB]
2026-03-10T14:43:04.891 INFO:teuthology.orchestra.run.vm03.stdout:Get:32 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-ceph-common all 19.2.3-678-ge911bdeb-1jammy [70.1 kB]
2026-03-10T14:43:04.891 INFO:teuthology.orchestra.run.vm03.stdout:Get:33 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rbd amd64 19.2.3-678-ge911bdeb-1jammy [334 kB]
2026-03-10T14:43:04.891 INFO:teuthology.orchestra.run.vm03.stdout:Get:34 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librgw2 amd64 19.2.3-678-ge911bdeb-1jammy [6935 kB]
2026-03-10T14:43:04.912 INFO:teuthology.orchestra.run.vm03.stdout:Get:35 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-portend all 3.0.0-1 [7240 B]
2026-03-10T14:43:04.912 INFO:teuthology.orchestra.run.vm03.stdout:Get:36 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-zc.lockfile all 2.0-1 [8980 B]
2026-03-10T14:43:04.912 INFO:teuthology.orchestra.run.vm03.stdout:Get:37 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-cherrypy3 all 18.6.1-4 [208 kB]
2026-03-10T14:43:04.915 INFO:teuthology.orchestra.run.vm03.stdout:Get:38 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-natsort all 8.0.2-1 [35.3 kB]
2026-03-10T14:43:04.915 INFO:teuthology.orchestra.run.vm03.stdout:Get:39 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-logutils all 0.3.3-8 [17.6 kB]
2026-03-10T14:43:04.916 INFO:teuthology.orchestra.run.vm03.stdout:Get:40 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-mako all 1.1.3+ds1-2ubuntu0.1 [60.5 kB]
2026-03-10T14:43:04.917 INFO:teuthology.orchestra.run.vm03.stdout:Get:41 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-simplegeneric all 0.8.1-3 [11.3 kB]
2026-03-10T14:43:04.931 INFO:teuthology.orchestra.run.vm00.stdout:Get:82 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rados amd64 19.2.3-678-ge911bdeb-1jammy [357 kB]
2026-03-10T14:43:04.933 INFO:teuthology.orchestra.run.vm00.stdout:Get:83 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-ceph-argparse all 19.2.3-678-ge911bdeb-1jammy [32.9 kB]
2026-03-10T14:43:04.933 INFO:teuthology.orchestra.run.vm00.stdout:Get:84 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-cephfs amd64 19.2.3-678-ge911bdeb-1jammy [184 kB]
2026-03-10T14:43:04.934 INFO:teuthology.orchestra.run.vm00.stdout:Get:85 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-ceph-common all 19.2.3-678-ge911bdeb-1jammy [70.1 kB]
2026-03-10T14:43:04.934 INFO:teuthology.orchestra.run.vm00.stdout:Get:86 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rbd amd64 19.2.3-678-ge911bdeb-1jammy [334 kB]
2026-03-10T14:43:04.936 INFO:teuthology.orchestra.run.vm00.stdout:Get:87 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librgw2 amd64 19.2.3-678-ge911bdeb-1jammy [6935 kB]
2026-03-10T14:43:05.008 INFO:teuthology.orchestra.run.vm03.stdout:Get:42 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-singledispatch all 3.4.0.3-3 [7320 B]
2026-03-10T14:43:05.008 INFO:teuthology.orchestra.run.vm03.stdout:Get:43 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-webob all 1:1.8.6-1.1ubuntu0.1 [86.7 kB]
2026-03-10T14:43:05.010 INFO:teuthology.orchestra.run.vm03.stdout:Get:44 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-waitress all 1.4.4-1.1ubuntu1.1 [47.0 kB]
2026-03-10T14:43:05.106 INFO:teuthology.orchestra.run.vm03.stdout:Get:45 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-tempita all 0.5.2-6ubuntu1 [15.1 kB]
2026-03-10T14:43:05.106 INFO:teuthology.orchestra.run.vm03.stdout:Get:46 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-paste all 3.5.0+dfsg1-1 [456 kB]
2026-03-10T14:43:05.232 INFO:teuthology.orchestra.run.vm03.stdout:Get:47 https://archive.ubuntu.com/ubuntu jammy/main amd64 python-pastedeploy-tpl all 2.1.1-1 [4892 B]
2026-03-10T14:43:05.232 INFO:teuthology.orchestra.run.vm03.stdout:Get:48 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pastedeploy all 2.1.1-1 [26.6 kB]
2026-03-10T14:43:05.232 INFO:teuthology.orchestra.run.vm03.stdout:Get:49 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-webtest all 2.0.35-1 [28.5 kB]
2026-03-10T14:43:05.233 INFO:teuthology.orchestra.run.vm03.stdout:Get:50 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pecan all 1.3.3-4ubuntu2 [87.3 kB]
2026-03-10T14:43:05.233 INFO:teuthology.orchestra.run.vm03.stdout:Get:51 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-werkzeug all 2.0.2+dfsg1-1ubuntu0.22.04.3 [181 kB]
2026-03-10T14:43:05.234 INFO:teuthology.orchestra.run.vm03.stdout:Get:52 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libfuse2 amd64 2.9.9-5ubuntu3 [90.3 kB]
2026-03-10T14:43:05.235 INFO:teuthology.orchestra.run.vm03.stdout:Get:53 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 python3-asyncssh all 2.5.0-1ubuntu0.1 [189 kB]
2026-03-10T14:43:05.236 INFO:teuthology.orchestra.run.vm03.stdout:Get:54 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-repoze.lru all 0.7-2 [12.1 kB]
2026-03-10T14:43:05.297 INFO:teuthology.orchestra.run.vm03.stdout:Get:55 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-routes all 2.5.1-1ubuntu1 [89.0 kB]
2026-03-10T14:43:05.332 INFO:teuthology.orchestra.run.vm03.stdout:Get:56 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-sklearn-lib amd64 0.23.2-5ubuntu6 [2058 kB]
2026-03-10T14:43:05.487 INFO:teuthology.orchestra.run.vm03.stdout:Get:57 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-joblib all 0.17.0-4ubuntu1 [204 kB]
2026-03-10T14:43:05.488 INFO:teuthology.orchestra.run.vm03.stdout:Get:58 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-threadpoolctl all 3.1.0-1 [21.3 kB]
2026-03-10T14:43:05.488 INFO:teuthology.orchestra.run.vm03.stdout:Get:59 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-sklearn all 0.23.2-5ubuntu6 [1829 kB]
2026-03-10T14:43:05.527 INFO:teuthology.orchestra.run.vm00.stdout:Get:88 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rgw amd64 19.2.3-678-ge911bdeb-1jammy [112 kB]
2026-03-10T14:43:05.531 INFO:teuthology.orchestra.run.vm00.stdout:Get:89 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libradosstriper1 amd64 19.2.3-678-ge911bdeb-1jammy [470 kB]
2026-03-10T14:43:05.544 INFO:teuthology.orchestra.run.vm03.stdout:Get:60 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rgw amd64 19.2.3-678-ge911bdeb-1jammy [112 kB]
2026-03-10T14:43:05.545 INFO:teuthology.orchestra.run.vm03.stdout:Get:61 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libradosstriper1 amd64 19.2.3-678-ge911bdeb-1jammy [470 kB]
2026-03-10T14:43:05.546 INFO:teuthology.orchestra.run.vm03.stdout:Get:62 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-common amd64 19.2.3-678-ge911bdeb-1jammy [26.5 MB]
2026-03-10T14:43:05.581 INFO:teuthology.orchestra.run.vm03.stdout:Get:63 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-cachetools all 5.0.0-1 [9722 B]
2026-03-10T14:43:05.581 INFO:teuthology.orchestra.run.vm03.stdout:Get:64 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-rsa all 4.8-1 [28.4 kB]
2026-03-10T14:43:05.582 INFO:teuthology.orchestra.run.vm03.stdout:Get:65 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-google-auth all 1.5.1-3 [35.7 kB]
2026-03-10T14:43:05.582 INFO:teuthology.orchestra.run.vm03.stdout:Get:66 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-requests-oauthlib all 1.3.0+ds-0.1 [18.7 kB]
2026-03-10T14:43:05.583 INFO:teuthology.orchestra.run.vm03.stdout:Get:67 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-websocket all 1.2.3-1 [34.7 kB]
2026-03-10T14:43:05.583 INFO:teuthology.orchestra.run.vm03.stdout:Get:68 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-kubernetes all 12.0.1-1ubuntu1 [353 kB]
2026-03-10T14:43:05.624 INFO:teuthology.orchestra.run.vm03.stdout:Get:69 https://archive.ubuntu.com/ubuntu jammy/main amd64 libonig5 amd64 6.9.7.1-2build1 [172 kB]
2026-03-10T14:43:05.626 INFO:teuthology.orchestra.run.vm03.stdout:Get:70 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libjq1 amd64 1.6-2.1ubuntu3.1 [133 kB]
2026-03-10T14:43:05.628 INFO:teuthology.orchestra.run.vm03.stdout:Get:71 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 jq amd64 1.6-2.1ubuntu3.1 [52.5 kB]
2026-03-10T14:43:05.638 INFO:teuthology.orchestra.run.vm00.stdout:Get:90 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-common amd64 19.2.3-678-ge911bdeb-1jammy [26.5 MB]
2026-03-10T14:43:05.720 INFO:teuthology.orchestra.run.vm03.stdout:Get:72 https://archive.ubuntu.com/ubuntu jammy/main amd64 socat amd64 1.7.4.1-3ubuntu4 [349 kB]
2026-03-10T14:43:05.724 INFO:teuthology.orchestra.run.vm03.stdout:Get:73 https://archive.ubuntu.com/ubuntu jammy/universe amd64 xmlstarlet amd64 1.6.1-2.1 [265 kB]
2026-03-10T14:43:05.727 INFO:teuthology.orchestra.run.vm03.stdout:Get:74 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-socket amd64 3.0~rc1+git+ac3201d-6 [78.9 kB]
2026-03-10T14:43:05.727 INFO:teuthology.orchestra.run.vm03.stdout:Get:75 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-sec amd64 1.0.2-1 [37.6 kB]
2026-03-10T14:43:05.728 INFO:teuthology.orchestra.run.vm03.stdout:Get:76 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 nvme-cli amd64 1.16-3ubuntu0.3 [474 kB]
2026-03-10T14:43:05.733 INFO:teuthology.orchestra.run.vm03.stdout:Get:77 https://archive.ubuntu.com/ubuntu jammy/main amd64 pkg-config amd64 0.29.2-1ubuntu3 [48.2 kB]
2026-03-10T14:43:05.734 INFO:teuthology.orchestra.run.vm03.stdout:Get:78 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 python-asyncssh-doc all 2.5.0-1ubuntu0.1 [309 kB]
2026-03-10T14:43:05.815 INFO:teuthology.orchestra.run.vm03.stdout:Get:79 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-iniconfig all 1.1.1-2 [6024 B]
2026-03-10T14:43:05.816 INFO:teuthology.orchestra.run.vm03.stdout:Get:80 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pastescript all 2.0.2-4 [54.6 kB]
2026-03-10T14:43:05.943 INFO:teuthology.orchestra.run.vm03.stdout:Get:81 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-pluggy all 0.13.0-7.1 [19.0 kB]
2026-03-10T14:43:06.006 INFO:teuthology.orchestra.run.vm03.stdout:Get:82 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-psutil amd64 5.9.0-1build1 [158 kB]
2026-03-10T14:43:06.007 INFO:teuthology.orchestra.run.vm03.stdout:Get:83 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-py all 1.10.0-1 [71.9 kB]
2026-03-10T14:43:06.007 INFO:teuthology.orchestra.run.vm03.stdout:Get:84 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-pygments all 2.11.2+dfsg-2ubuntu0.1 [750 kB]
2026-03-10T14:43:06.045 INFO:teuthology.orchestra.run.vm03.stdout:Get:85 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pyinotify all 0.9.6-1.3 [24.8 kB]
2026-03-10T14:43:06.103 INFO:teuthology.orchestra.run.vm03.stdout:Get:86 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-toml all 0.10.2-1 [16.5 kB]
2026-03-10T14:43:06.104 INFO:teuthology.orchestra.run.vm03.stdout:Get:87 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-pytest all 6.2.5-1ubuntu2 [214 kB]
2026-03-10T14:43:06.105 INFO:teuthology.orchestra.run.vm03.stdout:Get:88 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-simplejson amd64 3.17.6-1build1 [54.7 kB]
2026-03-10T14:43:06.105 INFO:teuthology.orchestra.run.vm03.stdout:Get:89 https://archive.ubuntu.com/ubuntu jammy/universe amd64 qttranslations5-l10n all 5.15.3-1 [1983 kB]
2026-03-10T14:43:06.200 INFO:teuthology.orchestra.run.vm03.stdout:Get:90 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 smartmontools amd64 7.2-1ubuntu0.1 [583 kB]
2026-03-10T14:43:06.877 INFO:teuthology.orchestra.run.vm03.stdout:Get:91 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-base amd64 19.2.3-678-ge911bdeb-1jammy [5178 kB]
2026-03-10T14:43:07.122 INFO:teuthology.orchestra.run.vm03.stdout:Get:92 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-modules-core all 19.2.3-678-ge911bdeb-1jammy [248 kB]
2026-03-10T14:43:07.143 INFO:teuthology.orchestra.run.vm03.stdout:Get:93 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libsqlite3-mod-ceph amd64 19.2.3-678-ge911bdeb-1jammy [125 kB]
2026-03-10T14:43:07.163 INFO:teuthology.orchestra.run.vm03.stdout:Get:94 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr amd64 19.2.3-678-ge911bdeb-1jammy [1081 kB]
2026-03-10T14:43:07.241 INFO:teuthology.orchestra.run.vm03.stdout:Get:95 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mon amd64 19.2.3-678-ge911bdeb-1jammy [6239 kB]
2026-03-10T14:43:07.672 INFO:teuthology.orchestra.run.vm03.stdout:Get:96 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-osd amd64 19.2.3-678-ge911bdeb-1jammy [23.0 MB]
2026-03-10T14:43:09.189 INFO:teuthology.orchestra.run.vm00.stdout:Get:91 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-base amd64 19.2.3-678-ge911bdeb-1jammy [5178 kB]
2026-03-10T14:43:09.499 INFO:teuthology.orchestra.run.vm03.stdout:Get:97 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph amd64 19.2.3-678-ge911bdeb-1jammy [14.2 kB]
2026-03-10T14:43:09.499 INFO:teuthology.orchestra.run.vm03.stdout:Get:98 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-fuse amd64 19.2.3-678-ge911bdeb-1jammy [1173 kB]
2026-03-10T14:43:09.560 INFO:teuthology.orchestra.run.vm03.stdout:Get:99 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mds amd64 19.2.3-678-ge911bdeb-1jammy [2503 kB]
2026-03-10T14:43:09.730 INFO:teuthology.orchestra.run.vm03.stdout:Get:100 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 cephadm amd64 19.2.3-678-ge911bdeb-1jammy [798 kB]
2026-03-10T14:43:09.757 INFO:teuthology.orchestra.run.vm03.stdout:Get:101 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-cephadm all 19.2.3-678-ge911bdeb-1jammy [157 kB]
2026-03-10T14:43:09.784 INFO:teuthology.orchestra.run.vm03.stdout:Get:102 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-dashboard all 19.2.3-678-ge911bdeb-1jammy [2396 kB]
2026-03-10T14:43:09.869 INFO:teuthology.orchestra.run.vm00.stdout:Get:92 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-modules-core all 19.2.3-678-ge911bdeb-1jammy [248 kB]
2026-03-10T14:43:09.912 INFO:teuthology.orchestra.run.vm00.stdout:Get:93 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libsqlite3-mod-ceph amd64 19.2.3-678-ge911bdeb-1jammy [125 kB]
2026-03-10T14:43:09.925 INFO:teuthology.orchestra.run.vm00.stdout:Get:94 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr amd64 19.2.3-678-ge911bdeb-1jammy [1081 kB]
2026-03-10T14:43:09.944 INFO:teuthology.orchestra.run.vm03.stdout:Get:103 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-diskprediction-local all 19.2.3-678-ge911bdeb-1jammy [8625 kB]
2026-03-10T14:43:10.060 INFO:teuthology.orchestra.run.vm00.stdout:Get:95 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mon amd64 19.2.3-678-ge911bdeb-1jammy [6239 kB]
2026-03-10T14:43:10.600 INFO:teuthology.orchestra.run.vm03.stdout:Get:104 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-k8sevents all 19.2.3-678-ge911bdeb-1jammy [14.3 kB]
2026-03-10T14:43:10.600 INFO:teuthology.orchestra.run.vm03.stdout:Get:105 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-test amd64 19.2.3-678-ge911bdeb-1jammy [52.1 MB]
2026-03-10T14:43:11.099 INFO:teuthology.orchestra.run.vm00.stdout:Get:96 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-osd amd64 19.2.3-678-ge911bdeb-1jammy [23.0 MB]
2026-03-10T14:43:14.215 INFO:teuthology.orchestra.run.vm03.stdout:Get:106 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-volume all 19.2.3-678-ge911bdeb-1jammy [135 kB]
2026-03-10T14:43:14.219 INFO:teuthology.orchestra.run.vm03.stdout:Get:107 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs-dev amd64 19.2.3-678-ge911bdeb-1jammy [41.0 kB]
2026-03-10T14:43:14.221 INFO:teuthology.orchestra.run.vm03.stdout:Get:108 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 radosgw amd64 19.2.3-678-ge911bdeb-1jammy [13.7 MB]
2026-03-10T14:43:14.527 INFO:teuthology.orchestra.run.vm00.stdout:Get:97 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph amd64 19.2.3-678-ge911bdeb-1jammy [14.2 kB]
2026-03-10T14:43:14.531 INFO:teuthology.orchestra.run.vm00.stdout:Get:98 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-fuse amd64 19.2.3-678-ge911bdeb-1jammy [1173 kB]
2026-03-10T14:43:14.682 INFO:teuthology.orchestra.run.vm00.stdout:Get:99 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mds amd64 19.2.3-678-ge911bdeb-1jammy [2503 kB]
2026-03-10T14:43:14.974 INFO:teuthology.orchestra.run.vm00.stdout:Get:100 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 cephadm amd64 19.2.3-678-ge911bdeb-1jammy [798 kB]
2026-03-10T14:43:15.053 INFO:teuthology.orchestra.run.vm03.stdout:Get:109 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 rbd-fuse amd64 19.2.3-678-ge911bdeb-1jammy [92.2 kB]
2026-03-10T14:43:15.087 INFO:teuthology.orchestra.run.vm00.stdout:Get:101 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-cephadm all 19.2.3-678-ge911bdeb-1jammy [157 kB]
2026-03-10T14:43:15.097 INFO:teuthology.orchestra.run.vm00.stdout:Get:102 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-dashboard all 19.2.3-678-ge911bdeb-1jammy [2396 kB]
2026-03-10T14:43:15.373
INFO:teuthology.orchestra.run.vm03.stdout:Fetched 178 MB in 12s (15.1 MB/s) 2026-03-10T14:43:15.394 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package liblttng-ust1:amd64. 2026-03-10T14:43:15.430 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 111717 files and directories currently installed.) 2026-03-10T14:43:15.433 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../000-liblttng-ust1_2.13.1-1ubuntu1_amd64.deb ... 2026-03-10T14:43:15.434 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking liblttng-ust1:amd64 (2.13.1-1ubuntu1) ... 2026-03-10T14:43:15.457 INFO:teuthology.orchestra.run.vm00.stdout:Get:103 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-diskprediction-local all 19.2.3-678-ge911bdeb-1jammy [8625 kB] 2026-03-10T14:43:15.459 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libdouble-conversion3:amd64. 2026-03-10T14:43:15.465 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../001-libdouble-conversion3_3.1.7-4_amd64.deb ... 2026-03-10T14:43:15.466 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libdouble-conversion3:amd64 (3.1.7-4) ... 2026-03-10T14:43:15.487 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libpcre2-16-0:amd64. 
2026-03-10T14:43:15.493 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../002-libpcre2-16-0_10.39-3ubuntu0.1_amd64.deb ... 2026-03-10T14:43:15.494 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ... 2026-03-10T14:43:15.593 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libqt5core5a:amd64. 2026-03-10T14:43:15.600 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../003-libqt5core5a_5.15.3+dfsg-2ubuntu0.2_amd64.deb ... 2026-03-10T14:43:15.639 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-10T14:43:15.693 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libqt5dbus5:amd64. 2026-03-10T14:43:15.700 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../004-libqt5dbus5_5.15.3+dfsg-2ubuntu0.2_amd64.deb ... 2026-03-10T14:43:15.701 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-10T14:43:15.803 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libqt5network5:amd64. 2026-03-10T14:43:15.810 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../005-libqt5network5_5.15.3+dfsg-2ubuntu0.2_amd64.deb ... 2026-03-10T14:43:15.811 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-10T14:43:15.848 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libthrift-0.16.0:amd64. 2026-03-10T14:43:15.853 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../006-libthrift-0.16.0_0.16.0-2_amd64.deb ... 2026-03-10T14:43:15.855 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libthrift-0.16.0:amd64 (0.16.0-2) ... 2026-03-10T14:43:15.883 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../007-librbd1_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 
2026-03-10T14:43:15.888 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking librbd1 (19.2.3-678-ge911bdeb-1jammy) over (17.2.9-0ubuntu0.22.04.2) ... 2026-03-10T14:43:15.975 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../008-librados2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T14:43:15.978 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking librados2 (19.2.3-678-ge911bdeb-1jammy) over (17.2.9-0ubuntu0.22.04.2) ... 2026-03-10T14:43:16.074 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libnbd0. 2026-03-10T14:43:16.077 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../009-libnbd0_1.10.5-1_amd64.deb ... 2026-03-10T14:43:16.078 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libnbd0 (1.10.5-1) ... 2026-03-10T14:43:16.098 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libcephfs2. 2026-03-10T14:43:16.102 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../010-libcephfs2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T14:43:16.103 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T14:43:16.133 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-rados. 2026-03-10T14:43:16.140 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../011-python3-rados_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T14:43:16.141 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-rados (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T14:43:16.162 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-ceph-argparse. 2026-03-10T14:43:16.168 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../012-python3-ceph-argparse_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-10T14:43:16.170 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-10T14:43:16.185 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-cephfs. 2026-03-10T14:43:16.191 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../013-python3-cephfs_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T14:43:16.192 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T14:43:16.211 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-ceph-common. 2026-03-10T14:43:16.217 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../014-python3-ceph-common_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-10T14:43:16.218 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T14:43:16.241 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-wcwidth. 2026-03-10T14:43:16.247 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../015-python3-wcwidth_0.2.5+dfsg1-1_all.deb ... 2026-03-10T14:43:16.248 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-wcwidth (0.2.5+dfsg1-1) ... 2026-03-10T14:43:16.269 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-prettytable. 2026-03-10T14:43:16.276 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../016-python3-prettytable_2.5.0-2_all.deb ... 2026-03-10T14:43:16.277 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-prettytable (2.5.0-2) ... 2026-03-10T14:43:16.295 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-rbd. 2026-03-10T14:43:16.301 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../017-python3-rbd_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T14:43:16.302 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-rbd (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-10T14:43:16.325 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package librdkafka1:amd64. 2026-03-10T14:43:16.331 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../018-librdkafka1_1.8.0-1build1_amd64.deb ... 2026-03-10T14:43:16.332 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking librdkafka1:amd64 (1.8.0-1build1) ... 2026-03-10T14:43:16.358 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libreadline-dev:amd64. 2026-03-10T14:43:16.363 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../019-libreadline-dev_8.1.2-1_amd64.deb ... 2026-03-10T14:43:16.364 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libreadline-dev:amd64 (8.1.2-1) ... 2026-03-10T14:43:16.386 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package liblua5.3-dev:amd64. 2026-03-10T14:43:16.392 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../020-liblua5.3-dev_5.3.6-1build1_amd64.deb ... 2026-03-10T14:43:16.393 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking liblua5.3-dev:amd64 (5.3.6-1build1) ... 2026-03-10T14:43:16.414 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package lua5.1. 2026-03-10T14:43:16.421 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../021-lua5.1_5.1.5-8.1build4_amd64.deb ... 2026-03-10T14:43:16.422 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking lua5.1 (5.1.5-8.1build4) ... 2026-03-10T14:43:16.444 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package lua-any. 2026-03-10T14:43:16.450 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../022-lua-any_27ubuntu1_all.deb ... 2026-03-10T14:43:16.451 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking lua-any (27ubuntu1) ... 2026-03-10T14:43:16.464 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package zip. 
2026-03-10T14:43:16.470 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../023-zip_3.0-12build2_amd64.deb ... 2026-03-10T14:43:16.471 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking zip (3.0-12build2) ... 2026-03-10T14:43:16.489 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package unzip. 2026-03-10T14:43:16.494 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../024-unzip_6.0-26ubuntu3.2_amd64.deb ... 2026-03-10T14:43:16.495 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking unzip (6.0-26ubuntu3.2) ... 2026-03-10T14:43:16.515 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package luarocks. 2026-03-10T14:43:16.520 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../025-luarocks_3.8.0+dfsg1-1_all.deb ... 2026-03-10T14:43:16.521 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking luarocks (3.8.0+dfsg1-1) ... 2026-03-10T14:43:16.580 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package librgw2. 2026-03-10T14:43:16.586 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../026-librgw2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T14:43:16.587 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking librgw2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T14:43:16.714 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-rgw. 2026-03-10T14:43:16.723 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../027-python3-rgw_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T14:43:16.724 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-rgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T14:43:16.747 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package liboath0:amd64. 2026-03-10T14:43:16.755 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../028-liboath0_2.6.7-3ubuntu0.1_amd64.deb ... 
2026-03-10T14:43:16.756 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking liboath0:amd64 (2.6.7-3ubuntu0.1) ... 2026-03-10T14:43:16.776 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libradosstriper1. 2026-03-10T14:43:16.782 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../029-libradosstriper1_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T14:43:16.785 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T14:43:16.809 INFO:teuthology.orchestra.run.vm00.stdout:Get:104 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-k8sevents all 19.2.3-678-ge911bdeb-1jammy [14.3 kB] 2026-03-10T14:43:16.812 INFO:teuthology.orchestra.run.vm00.stdout:Get:105 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-test amd64 19.2.3-678-ge911bdeb-1jammy [52.1 MB] 2026-03-10T14:43:16.816 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-common. 2026-03-10T14:43:16.822 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../030-ceph-common_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T14:43:16.823 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T14:43:17.510 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-base. 2026-03-10T14:43:17.515 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../031-ceph-base_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T14:43:17.521 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-base (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T14:43:17.641 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-jaraco.functools. 
2026-03-10T14:43:17.647 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../032-python3-jaraco.functools_3.4.0-2_all.deb ... 2026-03-10T14:43:17.647 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-jaraco.functools (3.4.0-2) ... 2026-03-10T14:43:17.666 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-cheroot. 2026-03-10T14:43:17.671 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../033-python3-cheroot_8.5.2+ds1-1ubuntu3.1_all.deb ... 2026-03-10T14:43:17.672 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-cheroot (8.5.2+ds1-1ubuntu3.1) ... 2026-03-10T14:43:17.695 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-jaraco.classes. 2026-03-10T14:43:17.701 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../034-python3-jaraco.classes_3.2.1-3_all.deb ... 2026-03-10T14:43:17.702 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-jaraco.classes (3.2.1-3) ... 2026-03-10T14:43:17.722 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-jaraco.text. 2026-03-10T14:43:17.728 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../035-python3-jaraco.text_3.6.0-2_all.deb ... 2026-03-10T14:43:17.729 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-jaraco.text (3.6.0-2) ... 2026-03-10T14:43:17.749 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-jaraco.collections. 2026-03-10T14:43:17.754 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../036-python3-jaraco.collections_3.4.0-2_all.deb ... 2026-03-10T14:43:17.755 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-jaraco.collections (3.4.0-2) ... 2026-03-10T14:43:17.772 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-tempora. 
2026-03-10T14:43:17.778 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../037-python3-tempora_4.1.2-1_all.deb ... 2026-03-10T14:43:17.779 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-tempora (4.1.2-1) ... 2026-03-10T14:43:17.800 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-portend. 2026-03-10T14:43:17.806 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../038-python3-portend_3.0.0-1_all.deb ... 2026-03-10T14:43:17.807 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-portend (3.0.0-1) ... 2026-03-10T14:43:17.828 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-zc.lockfile. 2026-03-10T14:43:17.835 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../039-python3-zc.lockfile_2.0-1_all.deb ... 2026-03-10T14:43:17.836 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-zc.lockfile (2.0-1) ... 2026-03-10T14:43:17.856 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-cherrypy3. 2026-03-10T14:43:17.862 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../040-python3-cherrypy3_18.6.1-4_all.deb ... 2026-03-10T14:43:17.863 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-cherrypy3 (18.6.1-4) ... 2026-03-10T14:43:17.896 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-natsort. 2026-03-10T14:43:17.901 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../041-python3-natsort_8.0.2-1_all.deb ... 2026-03-10T14:43:17.902 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-natsort (8.0.2-1) ... 2026-03-10T14:43:17.922 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-logutils. 2026-03-10T14:43:17.928 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../042-python3-logutils_0.3.3-8_all.deb ... 
2026-03-10T14:43:17.929 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-logutils (0.3.3-8) ... 2026-03-10T14:43:17.947 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-mako. 2026-03-10T14:43:17.953 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../043-python3-mako_1.1.3+ds1-2ubuntu0.1_all.deb ... 2026-03-10T14:43:17.954 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-mako (1.1.3+ds1-2ubuntu0.1) ... 2026-03-10T14:43:17.976 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-simplegeneric. 2026-03-10T14:43:17.982 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../044-python3-simplegeneric_0.8.1-3_all.deb ... 2026-03-10T14:43:17.983 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-simplegeneric (0.8.1-3) ... 2026-03-10T14:43:18.002 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-singledispatch. 2026-03-10T14:43:18.008 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../045-python3-singledispatch_3.4.0.3-3_all.deb ... 2026-03-10T14:43:18.009 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-singledispatch (3.4.0.3-3) ... 2026-03-10T14:43:18.026 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-webob. 2026-03-10T14:43:18.032 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../046-python3-webob_1%3a1.8.6-1.1ubuntu0.1_all.deb ... 2026-03-10T14:43:18.033 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-webob (1:1.8.6-1.1ubuntu0.1) ... 2026-03-10T14:43:18.055 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-waitress. 2026-03-10T14:43:18.061 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../047-python3-waitress_1.4.4-1.1ubuntu1.1_all.deb ... 
2026-03-10T14:43:18.064 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-waitress (1.4.4-1.1ubuntu1.1) ... 2026-03-10T14:43:18.085 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-tempita. 2026-03-10T14:43:18.091 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../048-python3-tempita_0.5.2-6ubuntu1_all.deb ... 2026-03-10T14:43:18.092 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-tempita (0.5.2-6ubuntu1) ... 2026-03-10T14:43:18.110 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-paste. 2026-03-10T14:43:18.115 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../049-python3-paste_3.5.0+dfsg1-1_all.deb ... 2026-03-10T14:43:18.116 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-paste (3.5.0+dfsg1-1) ... 2026-03-10T14:43:18.154 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python-pastedeploy-tpl. 2026-03-10T14:43:18.160 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../050-python-pastedeploy-tpl_2.1.1-1_all.deb ... 2026-03-10T14:43:18.161 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python-pastedeploy-tpl (2.1.1-1) ... 2026-03-10T14:43:18.180 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-pastedeploy. 2026-03-10T14:43:18.187 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../051-python3-pastedeploy_2.1.1-1_all.deb ... 2026-03-10T14:43:18.189 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-pastedeploy (2.1.1-1) ... 2026-03-10T14:43:18.435 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-webtest. 2026-03-10T14:43:18.441 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../052-python3-webtest_2.0.35-1_all.deb ... 2026-03-10T14:43:18.442 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-webtest (2.0.35-1) ... 
2026-03-10T14:43:18.462 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-pecan. 2026-03-10T14:43:18.468 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../053-python3-pecan_1.3.3-4ubuntu2_all.deb ... 2026-03-10T14:43:18.470 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-pecan (1.3.3-4ubuntu2) ... 2026-03-10T14:43:18.507 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-werkzeug. 2026-03-10T14:43:18.513 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../054-python3-werkzeug_2.0.2+dfsg1-1ubuntu0.22.04.3_all.deb ... 2026-03-10T14:43:18.514 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ... 2026-03-10T14:43:18.539 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-mgr-modules-core. 2026-03-10T14:43:18.545 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../055-ceph-mgr-modules-core_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-10T14:43:18.547 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T14:43:18.595 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libsqlite3-mod-ceph. 2026-03-10T14:43:18.601 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../056-libsqlite3-mod-ceph_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T14:43:18.603 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T14:43:18.626 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-mgr. 2026-03-10T14:43:18.633 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../057-ceph-mgr_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T14:43:18.634 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-10T14:43:18.671 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-mon. 2026-03-10T14:43:18.678 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../058-ceph-mon_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T14:43:18.680 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-mon (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T14:43:18.824 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libfuse2:amd64. 2026-03-10T14:43:18.831 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../059-libfuse2_2.9.9-5ubuntu3_amd64.deb ... 2026-03-10T14:43:18.847 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libfuse2:amd64 (2.9.9-5ubuntu3) ... 2026-03-10T14:43:18.870 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-osd. 2026-03-10T14:43:18.877 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../060-ceph-osd_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T14:43:18.878 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-osd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T14:43:19.237 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph. 2026-03-10T14:43:19.243 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../061-ceph_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T14:43:19.244 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T14:43:19.276 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-fuse. 2026-03-10T14:43:19.283 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../062-ceph-fuse_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T14:43:19.284 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T14:43:19.324 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-mds. 
2026-03-10T14:43:19.329 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../063-ceph-mds_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T14:43:19.331 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-mds (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T14:43:19.387 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package cephadm.
2026-03-10T14:43:19.392 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../064-cephadm_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T14:43:19.393 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T14:43:19.424 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-asyncssh.
2026-03-10T14:43:19.429 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../065-python3-asyncssh_2.5.0-1ubuntu0.1_all.deb ...
2026-03-10T14:43:19.431 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-asyncssh (2.5.0-1ubuntu0.1) ...
2026-03-10T14:43:19.461 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-mgr-cephadm.
2026-03-10T14:43:19.467 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../066-ceph-mgr-cephadm_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-10T14:43:19.468 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T14:43:19.498 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-repoze.lru.
2026-03-10T14:43:19.503 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../067-python3-repoze.lru_0.7-2_all.deb ...
2026-03-10T14:43:19.510 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-repoze.lru (0.7-2) ...
2026-03-10T14:43:19.530 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-routes.
2026-03-10T14:43:19.537 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../068-python3-routes_2.5.1-1ubuntu1_all.deb ...
2026-03-10T14:43:19.539 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-routes (2.5.1-1ubuntu1) ...
2026-03-10T14:43:19.565 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-mgr-dashboard.
2026-03-10T14:43:19.571 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../069-ceph-mgr-dashboard_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-10T14:43:19.572 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T14:43:19.948 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-sklearn-lib:amd64.
2026-03-10T14:43:19.954 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../070-python3-sklearn-lib_0.23.2-5ubuntu6_amd64.deb ...
2026-03-10T14:43:19.955 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ...
2026-03-10T14:43:20.021 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-joblib.
2026-03-10T14:43:20.026 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../071-python3-joblib_0.17.0-4ubuntu1_all.deb ...
2026-03-10T14:43:20.027 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-joblib (0.17.0-4ubuntu1) ...
2026-03-10T14:43:20.066 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-threadpoolctl.
2026-03-10T14:43:20.073 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../072-python3-threadpoolctl_3.1.0-1_all.deb ...
2026-03-10T14:43:20.074 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-threadpoolctl (3.1.0-1) ...
2026-03-10T14:43:20.094 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-sklearn.
2026-03-10T14:43:20.101 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../073-python3-sklearn_0.23.2-5ubuntu6_all.deb ...
2026-03-10T14:43:20.102 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-sklearn (0.23.2-5ubuntu6) ...
2026-03-10T14:43:20.249 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-mgr-diskprediction-local.
2026-03-10T14:43:20.256 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../074-ceph-mgr-diskprediction-local_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-10T14:43:20.257 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T14:43:20.562 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-cachetools.
2026-03-10T14:43:20.568 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../075-python3-cachetools_5.0.0-1_all.deb ...
2026-03-10T14:43:20.569 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-cachetools (5.0.0-1) ...
2026-03-10T14:43:20.584 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-rsa.
2026-03-10T14:43:20.589 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../076-python3-rsa_4.8-1_all.deb ...
2026-03-10T14:43:20.590 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-rsa (4.8-1) ...
2026-03-10T14:43:20.609 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-google-auth.
2026-03-10T14:43:20.616 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../077-python3-google-auth_1.5.1-3_all.deb ...
2026-03-10T14:43:20.618 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-google-auth (1.5.1-3) ...
2026-03-10T14:43:20.644 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-requests-oauthlib.
2026-03-10T14:43:20.650 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../078-python3-requests-oauthlib_1.3.0+ds-0.1_all.deb ...
2026-03-10T14:43:20.651 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-requests-oauthlib (1.3.0+ds-0.1) ...
2026-03-10T14:43:20.673 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-websocket.
2026-03-10T14:43:20.680 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../079-python3-websocket_1.2.3-1_all.deb ...
2026-03-10T14:43:20.681 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-websocket (1.2.3-1) ...
2026-03-10T14:43:20.711 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-kubernetes.
2026-03-10T14:43:20.717 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../080-python3-kubernetes_12.0.1-1ubuntu1_all.deb ...
2026-03-10T14:43:20.749 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-kubernetes (12.0.1-1ubuntu1) ...
2026-03-10T14:43:20.951 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-mgr-k8sevents.
2026-03-10T14:43:20.958 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../081-ceph-mgr-k8sevents_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-10T14:43:20.967 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T14:43:20.987 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libonig5:amd64.
2026-03-10T14:43:20.993 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../082-libonig5_6.9.7.1-2build1_amd64.deb ...
2026-03-10T14:43:20.994 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libonig5:amd64 (6.9.7.1-2build1) ...
2026-03-10T14:43:21.025 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libjq1:amd64.
2026-03-10T14:43:21.031 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../083-libjq1_1.6-2.1ubuntu3.1_amd64.deb ...
2026-03-10T14:43:21.032 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libjq1:amd64 (1.6-2.1ubuntu3.1) ...
2026-03-10T14:43:21.049 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package jq.
2026-03-10T14:43:21.056 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../084-jq_1.6-2.1ubuntu3.1_amd64.deb ...
2026-03-10T14:43:21.057 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking jq (1.6-2.1ubuntu3.1) ...
2026-03-10T14:43:21.074 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package socat.
2026-03-10T14:43:21.081 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../085-socat_1.7.4.1-3ubuntu4_amd64.deb ...
2026-03-10T14:43:21.082 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking socat (1.7.4.1-3ubuntu4) ...
2026-03-10T14:43:21.111 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package xmlstarlet.
2026-03-10T14:43:21.118 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../086-xmlstarlet_1.6.1-2.1_amd64.deb ...
2026-03-10T14:43:21.118 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking xmlstarlet (1.6.1-2.1) ...
2026-03-10T14:43:21.174 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-test.
2026-03-10T14:43:21.180 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../087-ceph-test_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T14:43:21.184 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-test (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T14:43:22.089 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-volume.
2026-03-10T14:43:22.095 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../088-ceph-volume_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-10T14:43:22.097 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-volume (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T14:43:22.132 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libcephfs-dev.
2026-03-10T14:43:22.139 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../089-libcephfs-dev_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T14:43:22.140 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T14:43:22.387 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package lua-socket:amd64.
2026-03-10T14:43:22.393 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../090-lua-socket_3.0~rc1+git+ac3201d-6_amd64.deb ...
2026-03-10T14:43:22.394 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ...
2026-03-10T14:43:22.422 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package lua-sec:amd64.
2026-03-10T14:43:22.427 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../091-lua-sec_1.0.2-1_amd64.deb ...
2026-03-10T14:43:22.428 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking lua-sec:amd64 (1.0.2-1) ...
2026-03-10T14:43:22.449 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package nvme-cli.
2026-03-10T14:43:22.455 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../092-nvme-cli_1.16-3ubuntu0.3_amd64.deb ...
2026-03-10T14:43:22.456 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking nvme-cli (1.16-3ubuntu0.3) ...
2026-03-10T14:43:22.501 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package pkg-config.
2026-03-10T14:43:22.507 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../093-pkg-config_0.29.2-1ubuntu3_amd64.deb ...
2026-03-10T14:43:22.508 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking pkg-config (0.29.2-1ubuntu3) ...
2026-03-10T14:43:22.528 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python-asyncssh-doc.
2026-03-10T14:43:22.534 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../094-python-asyncssh-doc_2.5.0-1ubuntu0.1_all.deb ...
2026-03-10T14:43:22.535 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python-asyncssh-doc (2.5.0-1ubuntu0.1) ...
2026-03-10T14:43:22.588 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-iniconfig.
2026-03-10T14:43:22.594 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../095-python3-iniconfig_1.1.1-2_all.deb ...
2026-03-10T14:43:22.596 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-iniconfig (1.1.1-2) ...
2026-03-10T14:43:22.615 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-pastescript.
2026-03-10T14:43:22.623 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../096-python3-pastescript_2.0.2-4_all.deb ...
2026-03-10T14:43:22.624 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-pastescript (2.0.2-4) ...
2026-03-10T14:43:22.648 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-pluggy.
2026-03-10T14:43:22.655 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../097-python3-pluggy_0.13.0-7.1_all.deb ...
2026-03-10T14:43:22.657 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-pluggy (0.13.0-7.1) ...
2026-03-10T14:43:22.679 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-psutil.
2026-03-10T14:43:22.686 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../098-python3-psutil_5.9.0-1build1_amd64.deb ...
2026-03-10T14:43:22.688 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-psutil (5.9.0-1build1) ...
2026-03-10T14:43:22.713 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-py.
2026-03-10T14:43:22.719 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../099-python3-py_1.10.0-1_all.deb ...
2026-03-10T14:43:22.720 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-py (1.10.0-1) ...
2026-03-10T14:43:22.752 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-pygments.
2026-03-10T14:43:22.757 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../100-python3-pygments_2.11.2+dfsg-2ubuntu0.1_all.deb ...
2026-03-10T14:43:22.759 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-pygments (2.11.2+dfsg-2ubuntu0.1) ...
2026-03-10T14:43:22.911 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-pyinotify.
2026-03-10T14:43:22.919 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../101-python3-pyinotify_0.9.6-1.3_all.deb ...
2026-03-10T14:43:22.958 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-pyinotify (0.9.6-1.3) ...
2026-03-10T14:43:23.039 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-toml.
2026-03-10T14:43:23.046 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../102-python3-toml_0.10.2-1_all.deb ...
2026-03-10T14:43:23.047 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-toml (0.10.2-1) ...
2026-03-10T14:43:23.073 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-pytest.
2026-03-10T14:43:23.080 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../103-python3-pytest_6.2.5-1ubuntu2_all.deb ...
2026-03-10T14:43:23.081 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-pytest (6.2.5-1ubuntu2) ...
2026-03-10T14:43:23.117 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-simplejson.
2026-03-10T14:43:23.124 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../104-python3-simplejson_3.17.6-1build1_amd64.deb ...
2026-03-10T14:43:23.140 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-simplejson (3.17.6-1build1) ...
2026-03-10T14:43:23.181 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package qttranslations5-l10n.
2026-03-10T14:43:23.188 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../105-qttranslations5-l10n_5.15.3-1_all.deb ...
2026-03-10T14:43:23.189 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking qttranslations5-l10n (5.15.3-1) ...
2026-03-10T14:43:23.311 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package radosgw.
2026-03-10T14:43:23.319 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../106-radosgw_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T14:43:23.319 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking radosgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T14:43:23.549 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package rbd-fuse.
2026-03-10T14:43:23.556 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../107-rbd-fuse_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T14:43:23.556 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T14:43:23.577 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package smartmontools.
2026-03-10T14:43:23.584 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../108-smartmontools_7.2-1ubuntu0.1_amd64.deb ...
2026-03-10T14:43:23.592 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking smartmontools (7.2-1ubuntu0.1) ...
2026-03-10T14:43:23.643 INFO:teuthology.orchestra.run.vm03.stdout:Setting up smartmontools (7.2-1ubuntu0.1) ...
2026-03-10T14:43:23.895 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/smartd.service → /lib/systemd/system/smartmontools.service.
2026-03-10T14:43:23.896 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/smartmontools.service → /lib/systemd/system/smartmontools.service.
2026-03-10T14:43:24.272 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-iniconfig (1.1.1-2) ...
2026-03-10T14:43:24.349 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libdouble-conversion3:amd64 (3.1.7-4) ...
2026-03-10T14:43:24.352 INFO:teuthology.orchestra.run.vm03.stdout:Setting up nvme-cli (1.16-3ubuntu0.3) ...
2026-03-10T14:43:24.417 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmefc-boot-connections.service → /lib/systemd/system/nvmefc-boot-connections.service.
2026-03-10T14:43:24.678 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmf-autoconnect.service → /lib/systemd/system/nvmf-autoconnect.service.
2026-03-10T14:43:25.030 INFO:teuthology.orchestra.run.vm03.stdout:nvmf-connect.target is a disabled or a static unit, not starting it.
2026-03-10T14:43:25.036 INFO:teuthology.orchestra.run.vm03.stdout:Could not execute systemctl: at /usr/bin/deb-systemd-invoke line 142.
2026-03-10T14:43:25.038 INFO:teuthology.orchestra.run.vm03.stdout:Setting up cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T14:43:25.079 INFO:teuthology.orchestra.run.vm03.stdout:Adding system user cephadm....done
2026-03-10T14:43:25.089 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-waitress (1.4.4-1.1ubuntu1.1) ...
2026-03-10T14:43:25.169 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-jaraco.classes (3.2.1-3) ...
2026-03-10T14:43:25.235 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python-asyncssh-doc (2.5.0-1ubuntu0.1) ...
2026-03-10T14:43:25.238 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-jaraco.functools (3.4.0-2) ...
2026-03-10T14:43:25.307 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-repoze.lru (0.7-2) ...
2026-03-10T14:43:25.377 INFO:teuthology.orchestra.run.vm03.stdout:Setting up liboath0:amd64 (2.6.7-3ubuntu0.1) ...
2026-03-10T14:43:25.379 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-py (1.10.0-1) ...
2026-03-10T14:43:25.484 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-joblib (0.17.0-4ubuntu1) ...
2026-03-10T14:43:25.677 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-cachetools (5.0.0-1) ...
2026-03-10T14:43:26.002 INFO:teuthology.orchestra.run.vm03.stdout:Setting up unzip (6.0-26ubuntu3.2) ...
2026-03-10T14:43:26.031 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-pyinotify (0.9.6-1.3) ...
2026-03-10T14:43:26.110 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-threadpoolctl (3.1.0-1) ...
2026-03-10T14:43:26.187 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T14:43:26.267 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ...
2026-03-10T14:43:26.269 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libnbd0 (1.10.5-1) ...
2026-03-10T14:43:26.272 INFO:teuthology.orchestra.run.vm03.stdout:Setting up lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ...
2026-03-10T14:43:26.274 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libreadline-dev:amd64 (8.1.2-1) ...
2026-03-10T14:43:26.277 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libfuse2:amd64 (2.9.9-5ubuntu3) ...
2026-03-10T14:43:26.279 INFO:teuthology.orchestra.run.vm03.stdout:Setting up lua5.1 (5.1.5-8.1build4) ...
2026-03-10T14:43:26.284 INFO:teuthology.orchestra.run.vm03.stdout:update-alternatives: using /usr/bin/lua5.1 to provide /usr/bin/lua (lua-interpreter) in auto mode
2026-03-10T14:43:26.286 INFO:teuthology.orchestra.run.vm03.stdout:update-alternatives: using /usr/bin/luac5.1 to provide /usr/bin/luac (lua-compiler) in auto mode
2026-03-10T14:43:26.288 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ...
2026-03-10T14:43:26.290 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-psutil (5.9.0-1build1) ...
2026-03-10T14:43:26.434 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-natsort (8.0.2-1) ...
2026-03-10T14:43:26.508 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-routes (2.5.1-1ubuntu1) ...
2026-03-10T14:43:26.591 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-simplejson (3.17.6-1build1) ...
2026-03-10T14:43:26.677 INFO:teuthology.orchestra.run.vm03.stdout:Setting up zip (3.0-12build2) ...
2026-03-10T14:43:26.679 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-pygments (2.11.2+dfsg-2ubuntu0.1) ...
2026-03-10T14:43:26.979 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-tempita (0.5.2-6ubuntu1) ...
2026-03-10T14:43:27.056 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python-pastedeploy-tpl (2.1.1-1) ...
2026-03-10T14:43:27.061 INFO:teuthology.orchestra.run.vm03.stdout:Setting up qttranslations5-l10n (5.15.3-1) ...
2026-03-10T14:43:27.061 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-wcwidth (0.2.5+dfsg1-1) ...
2026-03-10T14:43:27.191 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-asyncssh (2.5.0-1ubuntu0.1) ...
2026-03-10T14:43:27.546 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-paste (3.5.0+dfsg1-1) ...
2026-03-10T14:43:27.689 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-cheroot (8.5.2+ds1-1ubuntu3.1) ...
2026-03-10T14:43:27.782 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ...
2026-03-10T14:43:27.912 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-jaraco.text (3.6.0-2) ...
2026-03-10T14:43:27.983 INFO:teuthology.orchestra.run.vm03.stdout:Setting up socat (1.7.4.1-3ubuntu4) ...
2026-03-10T14:43:27.984 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T14:43:28.098 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-sklearn (0.23.2-5ubuntu6) ...
2026-03-10T14:43:28.107 INFO:teuthology.orchestra.run.vm00.stdout:Get:106 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-volume all 19.2.3-678-ge911bdeb-1jammy [135 kB]
2026-03-10T14:43:28.184 INFO:teuthology.orchestra.run.vm00.stdout:Get:107 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs-dev amd64 19.2.3-678-ge911bdeb-1jammy [41.0 kB]
2026-03-10T14:43:28.199 INFO:teuthology.orchestra.run.vm00.stdout:Get:108 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 radosgw amd64 19.2.3-678-ge911bdeb-1jammy [13.7 MB]
2026-03-10T14:43:28.702 INFO:teuthology.orchestra.run.vm03.stdout:Setting up pkg-config (0.29.2-1ubuntu3) ...
2026-03-10T14:43:28.731 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-10T14:43:28.736 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-toml (0.10.2-1) ...
2026-03-10T14:43:28.812 INFO:teuthology.orchestra.run.vm03.stdout:Setting up librdkafka1:amd64 (1.8.0-1build1) ...
2026-03-10T14:43:28.815 INFO:teuthology.orchestra.run.vm03.stdout:Setting up xmlstarlet (1.6.1-2.1) ...
2026-03-10T14:43:28.817 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-pluggy (0.13.0-7.1) ...
2026-03-10T14:43:28.947 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-zc.lockfile (2.0-1) ...
2026-03-10T14:43:29.126 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-10T14:43:29.129 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-rsa (4.8-1) ...
2026-03-10T14:43:29.208 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-singledispatch (3.4.0.3-3) ...
2026-03-10T14:43:29.275 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-logutils (0.3.3-8) ...
2026-03-10T14:43:29.350 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-tempora (4.1.2-1) ...
2026-03-10T14:43:29.428 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-simplegeneric (0.8.1-3) ...
2026-03-10T14:43:29.497 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-prettytable (2.5.0-2) ...
2026-03-10T14:43:29.579 INFO:teuthology.orchestra.run.vm03.stdout:Setting up liblttng-ust1:amd64 (2.13.1-1ubuntu1) ...
2026-03-10T14:43:29.581 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-websocket (1.2.3-1) ...
2026-03-10T14:43:29.667 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libonig5:amd64 (6.9.7.1-2build1) ...
2026-03-10T14:43:29.671 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-requests-oauthlib (1.3.0+ds-0.1) ...
2026-03-10T14:43:29.752 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-mako (1.1.3+ds1-2ubuntu0.1) ...
2026-03-10T14:43:29.856 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-webob (1:1.8.6-1.1ubuntu0.1) ...
2026-03-10T14:43:29.968 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-jaraco.collections (3.4.0-2) ...
2026-03-10T14:43:30.034 INFO:teuthology.orchestra.run.vm03.stdout:Setting up liblua5.3-dev:amd64 (5.3.6-1build1) ...
2026-03-10T14:43:30.036 INFO:teuthology.orchestra.run.vm03.stdout:Setting up lua-sec:amd64 (1.0.2-1) ...
2026-03-10T14:43:30.038 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libjq1:amd64 (1.6-2.1ubuntu3.1) ...
2026-03-10T14:43:30.040 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-pytest (6.2.5-1ubuntu2) ...
2026-03-10T14:43:30.179 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-pastedeploy (2.1.1-1) ...
2026-03-10T14:43:30.269 INFO:teuthology.orchestra.run.vm03.stdout:Setting up lua-any (27ubuntu1) ...
2026-03-10T14:43:30.280 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-portend (3.0.0-1) ...
2026-03-10T14:43:30.364 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-10T14:43:30.366 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-google-auth (1.5.1-3) ...
2026-03-10T14:43:30.451 INFO:teuthology.orchestra.run.vm03.stdout:Setting up jq (1.6-2.1ubuntu3.1) ...
2026-03-10T14:43:30.453 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-webtest (2.0.35-1) ...
2026-03-10T14:43:30.532 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-cherrypy3 (18.6.1-4) ...
2026-03-10T14:43:30.676 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-pastescript (2.0.2-4) ...
2026-03-10T14:43:30.762 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-pecan (1.3.3-4ubuntu2) ...
2026-03-10T14:43:30.882 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libthrift-0.16.0:amd64 (0.16.0-2) ...
2026-03-10T14:43:30.884 INFO:teuthology.orchestra.run.vm03.stdout:Setting up librados2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T14:43:30.887 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T14:43:30.889 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-kubernetes (12.0.1-1ubuntu1) ...
2026-03-10T14:43:31.512 INFO:teuthology.orchestra.run.vm03.stdout:Setting up luarocks (3.8.0+dfsg1-1) ...
2026-03-10T14:43:31.519 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T14:43:31.521 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T14:43:31.523 INFO:teuthology.orchestra.run.vm03.stdout:Setting up librbd1 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T14:43:31.526 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T14:43:31.528 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T14:43:31.593 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/remote-fs.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target.
2026-03-10T14:43:31.593 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target.
2026-03-10T14:43:32.194 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T14:43:32.197 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-rados (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T14:43:32.200 INFO:teuthology.orchestra.run.vm03.stdout:Setting up librgw2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T14:43:32.203 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-rbd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T14:43:32.206 INFO:teuthology.orchestra.run.vm03.stdout:Setting up rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T14:43:32.208 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-rgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T14:43:32.210 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T14:43:32.213 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T14:43:32.250 INFO:teuthology.orchestra.run.vm03.stdout:Adding group ceph....done
2026-03-10T14:43:32.294 INFO:teuthology.orchestra.run.vm03.stdout:Adding system user ceph....done
2026-03-10T14:43:32.302 INFO:teuthology.orchestra.run.vm03.stdout:Setting system user ceph properties....done
2026-03-10T14:43:32.307 INFO:teuthology.orchestra.run.vm03.stdout:chown: cannot access '/var/log/ceph/*.log*': No such file or directory
2026-03-10T14:43:32.372 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /lib/systemd/system/ceph.target.
2026-03-10T14:43:32.637 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/rbdmap.service → /lib/systemd/system/rbdmap.service.
2026-03-10T14:43:33.070 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-test (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T14:43:33.074 INFO:teuthology.orchestra.run.vm03.stdout:Setting up radosgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T14:43:33.323 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target.
2026-03-10T14:43:33.323 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target.
2026-03-10T14:43:33.720 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-base (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T14:43:33.814 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /lib/systemd/system/ceph-crash.service.
2026-03-10T14:43:34.208 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-mds (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T14:43:34.240 INFO:teuthology.orchestra.run.vm00.stdout:Get:109 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 rbd-fuse amd64 19.2.3-678-ge911bdeb-1jammy [92.2 kB]
2026-03-10T14:43:34.273 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target.
2026-03-10T14:43:34.273 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target.
2026-03-10T14:43:34.626 INFO:teuthology.orchestra.run.vm00.stdout:Fetched 178 MB in 31s (5713 kB/s)
2026-03-10T14:43:34.744 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T14:43:34.748 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package liblttng-ust1:amd64.
2026-03-10T14:43:34.786 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 111717 files and directories currently installed.)
2026-03-10T14:43:34.788 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../000-liblttng-ust1_2.13.1-1ubuntu1_amd64.deb ...
2026-03-10T14:43:34.790 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking liblttng-ust1:amd64 (2.13.1-1ubuntu1) ...
2026-03-10T14:43:34.814 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target.
2026-03-10T14:43:34.814 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target.
2026-03-10T14:43:34.815 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libdouble-conversion3:amd64.
2026-03-10T14:43:34.821 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../001-libdouble-conversion3_3.1.7-4_amd64.deb ...
2026-03-10T14:43:34.822 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libdouble-conversion3:amd64 (3.1.7-4) ...
2026-03-10T14:43:34.839 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libpcre2-16-0:amd64.
2026-03-10T14:43:34.845 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../002-libpcre2-16-0_10.39-3ubuntu0.1_amd64.deb ...
2026-03-10T14:43:34.847 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ...
2026-03-10T14:43:34.874 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libqt5core5a:amd64.
2026-03-10T14:43:34.882 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../003-libqt5core5a_5.15.3+dfsg-2ubuntu0.2_amd64.deb ...
2026-03-10T14:43:34.926 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-10T14:43:35.141 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libqt5dbus5:amd64.
2026-03-10T14:43:35.146 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../004-libqt5dbus5_5.15.3+dfsg-2ubuntu0.2_amd64.deb ...
2026-03-10T14:43:35.147 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-10T14:43:35.164 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-osd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T14:43:35.168 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libqt5network5:amd64.
2026-03-10T14:43:35.174 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../005-libqt5network5_5.15.3+dfsg-2ubuntu0.2_amd64.deb ...
2026-03-10T14:43:35.175 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-10T14:43:35.201 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libthrift-0.16.0:amd64.
2026-03-10T14:43:35.207 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../006-libthrift-0.16.0_0.16.0-2_amd64.deb ...
2026-03-10T14:43:35.208 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libthrift-0.16.0:amd64 (0.16.0-2) ...
2026-03-10T14:43:35.233 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../007-librbd1_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T14:43:35.236 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking librbd1 (19.2.3-678-ge911bdeb-1jammy) over (17.2.9-0ubuntu0.22.04.2) ...
2026-03-10T14:43:35.246 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target.
2026-03-10T14:43:35.246 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target.
2026-03-10T14:43:35.325 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../008-librados2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T14:43:35.636 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T14:43:35.639 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T14:43:35.639 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking librados2 (19.2.3-678-ge911bdeb-1jammy) over (17.2.9-0ubuntu0.22.04.2) ...
2026-03-10T14:43:35.655 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-mon (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T14:43:35.717 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libnbd0.
2026-03-10T14:43:35.719 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target.
2026-03-10T14:43:35.719 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target.
2026-03-10T14:43:35.722 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../009-libnbd0_1.10.5-1_amd64.deb ...
2026-03-10T14:43:35.723 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libnbd0 (1.10.5-1) ...
2026-03-10T14:43:35.745 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libcephfs2.
2026-03-10T14:43:35.751 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../010-libcephfs2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T14:43:35.752 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T14:43:35.783 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-rados.
2026-03-10T14:43:35.789 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../011-python3-rados_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T14:43:35.790 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-rados (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T14:43:35.815 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-ceph-argparse.
2026-03-10T14:43:35.820 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../012-python3-ceph-argparse_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-10T14:43:35.821 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T14:43:35.835 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-cephfs. 2026-03-10T14:43:35.840 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../013-python3-cephfs_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T14:43:35.841 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T14:43:35.860 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-ceph-common. 2026-03-10T14:43:35.865 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../014-python3-ceph-common_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-10T14:43:35.866 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T14:43:35.892 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-wcwidth. 2026-03-10T14:43:35.897 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../015-python3-wcwidth_0.2.5+dfsg1-1_all.deb ... 2026-03-10T14:43:35.898 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-wcwidth (0.2.5+dfsg1-1) ... 2026-03-10T14:43:35.919 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-prettytable. 2026-03-10T14:43:35.926 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../016-python3-prettytable_2.5.0-2_all.deb ... 2026-03-10T14:43:35.927 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-prettytable (2.5.0-2) ... 2026-03-10T14:43:35.945 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-rbd. 
2026-03-10T14:43:35.951 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../017-python3-rbd_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T14:43:35.952 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-rbd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T14:43:35.976 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package librdkafka1:amd64. 2026-03-10T14:43:35.982 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../018-librdkafka1_1.8.0-1build1_amd64.deb ... 2026-03-10T14:43:35.983 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking librdkafka1:amd64 (1.8.0-1build1) ... 2026-03-10T14:43:36.007 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libreadline-dev:amd64. 2026-03-10T14:43:36.012 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../019-libreadline-dev_8.1.2-1_amd64.deb ... 2026-03-10T14:43:36.013 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libreadline-dev:amd64 (8.1.2-1) ... 2026-03-10T14:43:36.031 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package liblua5.3-dev:amd64. 2026-03-10T14:43:36.037 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../020-liblua5.3-dev_5.3.6-1build1_amd64.deb ... 2026-03-10T14:43:36.038 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking liblua5.3-dev:amd64 (5.3.6-1build1) ... 2026-03-10T14:43:36.059 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package lua5.1. 2026-03-10T14:43:36.064 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../021-lua5.1_5.1.5-8.1build4_amd64.deb ... 2026-03-10T14:43:36.065 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking lua5.1 (5.1.5-8.1build4) ... 2026-03-10T14:43:36.084 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package lua-any. 2026-03-10T14:43:36.090 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../022-lua-any_27ubuntu1_all.deb ... 
2026-03-10T14:43:36.091 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking lua-any (27ubuntu1) ... 2026-03-10T14:43:36.098 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T14:43:36.104 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package zip. 2026-03-10T14:43:36.108 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../023-zip_3.0-12build2_amd64.deb ... 2026-03-10T14:43:36.109 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking zip (3.0-12build2) ... 2026-03-10T14:43:36.113 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T14:43:36.117 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T14:43:36.128 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package unzip. 2026-03-10T14:43:36.132 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-volume (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T14:43:36.134 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../024-unzip_6.0-26ubuntu3.2_amd64.deb ... 2026-03-10T14:43:36.135 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking unzip (6.0-26ubuntu3.2) ... 2026-03-10T14:43:36.174 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package luarocks. 2026-03-10T14:43:36.179 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../025-luarocks_3.8.0+dfsg1-1_all.deb ... 2026-03-10T14:43:36.180 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking luarocks (3.8.0+dfsg1-1) ... 2026-03-10T14:43:36.229 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package librgw2. 2026-03-10T14:43:36.234 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../026-librgw2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 
2026-03-10T14:43:36.235 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking librgw2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T14:43:36.263 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for mailcap (3.70+nmu1ubuntu1) ... 2026-03-10T14:43:36.272 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-10T14:43:36.291 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-10T14:43:36.372 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-rgw. 2026-03-10T14:43:36.380 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../027-python3-rgw_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T14:43:36.456 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-rgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T14:43:36.494 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for install-info (6.8-4build1) ... 2026-03-10T14:43:36.497 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package liboath0:amd64. 2026-03-10T14:43:36.504 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../028-liboath0_2.6.7-3ubuntu0.1_amd64.deb ... 2026-03-10T14:43:36.505 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking liboath0:amd64 (2.6.7-3ubuntu0.1) ... 2026-03-10T14:43:36.523 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libradosstriper1. 2026-03-10T14:43:36.530 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../029-libradosstriper1_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T14:43:36.531 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T14:43:36.557 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph-common. 
2026-03-10T14:43:36.563 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../030-ceph-common_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T14:43:36.564 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T14:43:36.883 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T14:43:36.883 INFO:teuthology.orchestra.run.vm03.stdout:Running kernel seems to be up-to-date. 2026-03-10T14:43:36.883 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T14:43:36.883 INFO:teuthology.orchestra.run.vm03.stdout:Services to be restarted: 2026-03-10T14:43:36.889 INFO:teuthology.orchestra.run.vm03.stdout: systemctl restart packagekit.service 2026-03-10T14:43:36.892 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T14:43:36.892 INFO:teuthology.orchestra.run.vm03.stdout:Service restarts being deferred: 2026-03-10T14:43:36.892 INFO:teuthology.orchestra.run.vm03.stdout: systemctl restart networkd-dispatcher.service 2026-03-10T14:43:36.892 INFO:teuthology.orchestra.run.vm03.stdout: systemctl restart unattended-upgrades.service 2026-03-10T14:43:36.892 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T14:43:36.892 INFO:teuthology.orchestra.run.vm03.stdout:No containers need to be restarted. 2026-03-10T14:43:36.892 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T14:43:36.892 INFO:teuthology.orchestra.run.vm03.stdout:No user sessions are running outdated binaries. 2026-03-10T14:43:36.892 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T14:43:36.892 INFO:teuthology.orchestra.run.vm03.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host. 2026-03-10T14:43:36.987 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph-base. 2026-03-10T14:43:36.993 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../031-ceph-base_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 
2026-03-10T14:43:36.997 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph-base (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T14:43:37.109 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-jaraco.functools. 2026-03-10T14:43:37.115 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../032-python3-jaraco.functools_3.4.0-2_all.deb ... 2026-03-10T14:43:37.116 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-jaraco.functools (3.4.0-2) ... 2026-03-10T14:43:37.133 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-cheroot. 2026-03-10T14:43:37.138 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../033-python3-cheroot_8.5.2+ds1-1ubuntu3.1_all.deb ... 2026-03-10T14:43:37.139 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-cheroot (8.5.2+ds1-1ubuntu3.1) ... 2026-03-10T14:43:37.172 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-jaraco.classes. 2026-03-10T14:43:37.178 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../034-python3-jaraco.classes_3.2.1-3_all.deb ... 2026-03-10T14:43:37.179 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-jaraco.classes (3.2.1-3) ... 2026-03-10T14:43:37.196 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-jaraco.text. 2026-03-10T14:43:37.200 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../035-python3-jaraco.text_3.6.0-2_all.deb ... 2026-03-10T14:43:37.201 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-jaraco.text (3.6.0-2) ... 2026-03-10T14:43:37.216 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-jaraco.collections. 2026-03-10T14:43:37.222 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../036-python3-jaraco.collections_3.4.0-2_all.deb ... 
2026-03-10T14:43:37.223 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-jaraco.collections (3.4.0-2) ... 2026-03-10T14:43:37.237 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-tempora. 2026-03-10T14:43:37.242 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../037-python3-tempora_4.1.2-1_all.deb ... 2026-03-10T14:43:37.244 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-tempora (4.1.2-1) ... 2026-03-10T14:43:37.258 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-portend. 2026-03-10T14:43:37.263 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../038-python3-portend_3.0.0-1_all.deb ... 2026-03-10T14:43:37.264 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-portend (3.0.0-1) ... 2026-03-10T14:43:37.278 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-zc.lockfile. 2026-03-10T14:43:37.284 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../039-python3-zc.lockfile_2.0-1_all.deb ... 2026-03-10T14:43:37.284 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-zc.lockfile (2.0-1) ... 2026-03-10T14:43:37.299 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-cherrypy3. 2026-03-10T14:43:37.304 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../040-python3-cherrypy3_18.6.1-4_all.deb ... 2026-03-10T14:43:37.305 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-cherrypy3 (18.6.1-4) ... 2026-03-10T14:43:37.340 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-natsort. 2026-03-10T14:43:37.348 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../041-python3-natsort_8.0.2-1_all.deb ... 2026-03-10T14:43:37.374 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-natsort (8.0.2-1) ... 
2026-03-10T14:43:37.402 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-logutils. 2026-03-10T14:43:37.408 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../042-python3-logutils_0.3.3-8_all.deb ... 2026-03-10T14:43:37.409 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-logutils (0.3.3-8) ... 2026-03-10T14:43:37.425 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-mako. 2026-03-10T14:43:37.431 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../043-python3-mako_1.1.3+ds1-2ubuntu0.1_all.deb ... 2026-03-10T14:43:37.432 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-mako (1.1.3+ds1-2ubuntu0.1) ... 2026-03-10T14:43:37.455 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-simplegeneric. 2026-03-10T14:43:37.461 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../044-python3-simplegeneric_0.8.1-3_all.deb ... 2026-03-10T14:43:37.462 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-simplegeneric (0.8.1-3) ... 2026-03-10T14:43:37.479 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-singledispatch. 2026-03-10T14:43:37.485 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../045-python3-singledispatch_3.4.0.3-3_all.deb ... 2026-03-10T14:43:37.486 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-singledispatch (3.4.0.3-3) ... 2026-03-10T14:43:37.504 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-webob. 2026-03-10T14:43:37.511 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../046-python3-webob_1%3a1.8.6-1.1ubuntu0.1_all.deb ... 2026-03-10T14:43:37.512 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-webob (1:1.8.6-1.1ubuntu0.1) ... 
2026-03-10T14:43:37.533 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-waitress. 2026-03-10T14:43:37.538 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../047-python3-waitress_1.4.4-1.1ubuntu1.1_all.deb ... 2026-03-10T14:43:37.540 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-waitress (1.4.4-1.1ubuntu1.1) ... 2026-03-10T14:43:37.557 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-tempita. 2026-03-10T14:43:37.563 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../048-python3-tempita_0.5.2-6ubuntu1_all.deb ... 2026-03-10T14:43:37.563 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-tempita (0.5.2-6ubuntu1) ... 2026-03-10T14:43:37.579 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-paste. 2026-03-10T14:43:37.586 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../049-python3-paste_3.5.0+dfsg1-1_all.deb ... 2026-03-10T14:43:37.586 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-paste (3.5.0+dfsg1-1) ... 2026-03-10T14:43:37.624 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python-pastedeploy-tpl. 2026-03-10T14:43:37.629 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../050-python-pastedeploy-tpl_2.1.1-1_all.deb ... 2026-03-10T14:43:37.630 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python-pastedeploy-tpl (2.1.1-1) ... 2026-03-10T14:43:37.645 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-pastedeploy. 2026-03-10T14:43:37.650 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../051-python3-pastedeploy_2.1.1-1_all.deb ... 2026-03-10T14:43:37.651 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-pastedeploy (2.1.1-1) ... 2026-03-10T14:43:37.668 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-webtest. 
2026-03-10T14:43:37.673 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../052-python3-webtest_2.0.35-1_all.deb ... 2026-03-10T14:43:37.674 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-webtest (2.0.35-1) ... 2026-03-10T14:43:37.691 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-pecan. 2026-03-10T14:43:37.697 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../053-python3-pecan_1.3.3-4ubuntu2_all.deb ... 2026-03-10T14:43:37.698 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-pecan (1.3.3-4ubuntu2) ... 2026-03-10T14:43:37.779 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-werkzeug. 2026-03-10T14:43:37.786 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../054-python3-werkzeug_2.0.2+dfsg1-1ubuntu0.22.04.3_all.deb ... 2026-03-10T14:43:37.787 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ... 2026-03-10T14:43:37.818 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph-mgr-modules-core. 2026-03-10T14:43:37.825 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../055-ceph-mgr-modules-core_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-10T14:43:37.826 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T14:43:37.959 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libsqlite3-mod-ceph. 2026-03-10T14:43:37.965 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../056-libsqlite3-mod-ceph_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T14:43:37.967 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T14:43:37.985 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph-mgr. 
2026-03-10T14:43:37.991 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../057-ceph-mgr_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T14:43:37.993 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T14:43:38.026 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph-mon. 2026-03-10T14:43:38.033 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../058-ceph-mon_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T14:43:38.034 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph-mon (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T14:43:38.078 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T14:43:38.081 DEBUG:teuthology.orchestra.run.vm03:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install python3-pytest python3-xmltodict python3-jmespath 2026-03-10T14:43:38.144 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libfuse2:amd64. 2026-03-10T14:43:38.150 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../059-libfuse2_2.9.9-5ubuntu3_amd64.deb ... 2026-03-10T14:43:38.162 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 2026-03-10T14:43:38.168 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libfuse2:amd64 (2.9.9-5ubuntu3) ... 2026-03-10T14:43:38.191 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph-osd. 2026-03-10T14:43:38.197 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../060-ceph-osd_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T14:43:38.198 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph-osd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T14:43:38.414 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree... 
2026-03-10T14:43:38.414 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information... 2026-03-10T14:43:38.570 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph. 2026-03-10T14:43:38.577 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../061-ceph_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T14:43:38.578 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T14:43:38.596 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph-fuse. 2026-03-10T14:43:38.598 INFO:teuthology.orchestra.run.vm03.stdout:python3-pytest is already the newest version (6.2.5-1ubuntu2). 2026-03-10T14:43:38.598 INFO:teuthology.orchestra.run.vm03.stdout:python3-pytest set to manually installed. 2026-03-10T14:43:38.598 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T14:43:38.599 INFO:teuthology.orchestra.run.vm03.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-10T14:43:38.599 INFO:teuthology.orchestra.run.vm03.stdout: libsgutils2-2 sg3-utils sg3-utils-udev 2026-03-10T14:43:38.599 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T14:43:38.602 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../062-ceph-fuse_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T14:43:38.611 INFO:teuthology.orchestra.run.vm03.stdout:The following NEW packages will be installed: 2026-03-10T14:43:38.644 INFO:teuthology.orchestra.run.vm03.stdout: python3-jmespath python3-xmltodict 2026-03-10T14:43:38.657 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T14:43:38.764 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph-mds. 
2026-03-10T14:43:38.771 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../063-ceph-mds_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T14:43:38.772 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph-mds (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T14:43:38.813 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 2 newly installed, 0 to remove and 12 not upgraded. 2026-03-10T14:43:38.814 INFO:teuthology.orchestra.run.vm03.stdout:Need to get 34.3 kB of archives. 2026-03-10T14:43:38.814 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 146 kB of additional disk space will be used. 2026-03-10T14:43:38.814 INFO:teuthology.orchestra.run.vm03.stdout:Get:1 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jmespath all 0.10.0-1 [21.7 kB] 2026-03-10T14:43:38.857 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package cephadm. 2026-03-10T14:43:38.857 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../064-cephadm_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T14:43:38.858 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T14:43:38.882 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-asyncssh. 2026-03-10T14:43:38.890 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../065-python3-asyncssh_2.5.0-1ubuntu0.1_all.deb ... 2026-03-10T14:43:38.890 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-asyncssh (2.5.0-1ubuntu0.1) ... 2026-03-10T14:43:38.891 INFO:teuthology.orchestra.run.vm03.stdout:Get:2 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-xmltodict all 0.12.0-2 [12.6 kB] 2026-03-10T14:43:38.923 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph-mgr-cephadm. 2026-03-10T14:43:38.929 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../066-ceph-mgr-cephadm_19.2.3-678-ge911bdeb-1jammy_all.deb ... 
2026-03-10T14:43:38.930 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T14:43:38.959 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-repoze.lru. 2026-03-10T14:43:38.966 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../067-python3-repoze.lru_0.7-2_all.deb ... 2026-03-10T14:43:38.967 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-repoze.lru (0.7-2) ... 2026-03-10T14:43:38.985 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-routes. 2026-03-10T14:43:38.992 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../068-python3-routes_2.5.1-1ubuntu1_all.deb ... 2026-03-10T14:43:38.993 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-routes (2.5.1-1ubuntu1) ... 2026-03-10T14:43:39.021 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph-mgr-dashboard. 2026-03-10T14:43:39.029 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../069-ceph-mgr-dashboard_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-10T14:43:39.030 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T14:43:39.136 INFO:teuthology.orchestra.run.vm03.stdout:Fetched 34.3 kB in 0s (122 kB/s) 2026-03-10T14:43:39.637 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-jmespath. 2026-03-10T14:43:39.663 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-sklearn-lib:amd64. 2026-03-10T14:43:39.668 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../070-python3-sklearn-lib_0.23.2-5ubuntu6_amd64.deb ... 2026-03-10T14:43:39.669 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ... 2026-03-10T14:43:39.676 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... 
(Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118577 files and directories currently installed.) 2026-03-10T14:43:39.679 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../python3-jmespath_0.10.0-1_all.deb ... 2026-03-10T14:43:39.681 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-jmespath (0.10.0-1) ... 2026-03-10T14:43:39.702 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-xmltodict. 2026-03-10T14:43:39.706 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../python3-xmltodict_0.12.0-2_all.deb ... 2026-03-10T14:43:39.723 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-xmltodict (0.12.0-2) ... 2026-03-10T14:43:39.741 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-joblib. 2026-03-10T14:43:39.745 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../071-python3-joblib_0.17.0-4ubuntu1_all.deb ... 2026-03-10T14:43:39.747 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-joblib (0.17.0-4ubuntu1) ... 2026-03-10T14:43:39.755 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-xmltodict (0.12.0-2) ... 2026-03-10T14:43:39.786 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-threadpoolctl. 2026-03-10T14:43:39.793 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../072-python3-threadpoolctl_3.1.0-1_all.deb ... 
2026-03-10T14:43:39.794 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-threadpoolctl (3.1.0-1) ... 2026-03-10T14:43:39.813 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-sklearn. 2026-03-10T14:43:39.820 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../073-python3-sklearn_0.23.2-5ubuntu6_all.deb ... 2026-03-10T14:43:39.821 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-sklearn (0.23.2-5ubuntu6) ... 2026-03-10T14:43:39.835 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-jmespath (0.10.0-1) ... 2026-03-10T14:43:39.995 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph-mgr-diskprediction-local. 2026-03-10T14:43:40.002 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../074-ceph-mgr-diskprediction-local_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-10T14:43:40.004 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T14:43:40.233 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T14:43:40.233 INFO:teuthology.orchestra.run.vm03.stdout:Running kernel seems to be up-to-date. 
2026-03-10T14:43:40.233 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T14:43:40.233 INFO:teuthology.orchestra.run.vm03.stdout:Services to be restarted: 2026-03-10T14:43:40.240 INFO:teuthology.orchestra.run.vm03.stdout: systemctl restart packagekit.service 2026-03-10T14:43:40.243 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T14:43:40.243 INFO:teuthology.orchestra.run.vm03.stdout:Service restarts being deferred: 2026-03-10T14:43:40.243 INFO:teuthology.orchestra.run.vm03.stdout: systemctl restart networkd-dispatcher.service 2026-03-10T14:43:40.243 INFO:teuthology.orchestra.run.vm03.stdout: systemctl restart unattended-upgrades.service 2026-03-10T14:43:40.243 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T14:43:40.243 INFO:teuthology.orchestra.run.vm03.stdout:No containers need to be restarted. 2026-03-10T14:43:40.243 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T14:43:40.243 INFO:teuthology.orchestra.run.vm03.stdout:No user sessions are running outdated binaries. 2026-03-10T14:43:40.243 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T14:43:40.243 INFO:teuthology.orchestra.run.vm03.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host. 2026-03-10T14:43:40.319 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-cachetools. 2026-03-10T14:43:40.325 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../075-python3-cachetools_5.0.0-1_all.deb ... 2026-03-10T14:43:40.326 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-cachetools (5.0.0-1) ... 2026-03-10T14:43:40.344 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-rsa. 2026-03-10T14:43:40.350 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../076-python3-rsa_4.8-1_all.deb ... 2026-03-10T14:43:40.352 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-rsa (4.8-1) ... 
2026-03-10T14:43:40.381 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-google-auth. 2026-03-10T14:43:40.387 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../077-python3-google-auth_1.5.1-3_all.deb ... 2026-03-10T14:43:40.419 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-google-auth (1.5.1-3) ... 2026-03-10T14:43:40.448 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-requests-oauthlib. 2026-03-10T14:43:40.454 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../078-python3-requests-oauthlib_1.3.0+ds-0.1_all.deb ... 2026-03-10T14:43:40.455 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-requests-oauthlib (1.3.0+ds-0.1) ... 2026-03-10T14:43:40.476 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-websocket. 2026-03-10T14:43:40.478 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../079-python3-websocket_1.2.3-1_all.deb ... 2026-03-10T14:43:40.479 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-websocket (1.2.3-1) ... 2026-03-10T14:43:40.502 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-kubernetes. 2026-03-10T14:43:40.508 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../080-python3-kubernetes_12.0.1-1ubuntu1_all.deb ... 2026-03-10T14:43:40.523 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-kubernetes (12.0.1-1ubuntu1) ... 2026-03-10T14:43:40.693 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph-mgr-k8sevents. 2026-03-10T14:43:40.699 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../081-ceph-mgr-k8sevents_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-10T14:43:40.701 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-10T14:43:40.717 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libonig5:amd64. 2026-03-10T14:43:40.724 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../082-libonig5_6.9.7.1-2build1_amd64.deb ... 2026-03-10T14:43:40.725 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libonig5:amd64 (6.9.7.1-2build1) ... 2026-03-10T14:43:40.746 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libjq1:amd64. 2026-03-10T14:43:40.753 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../083-libjq1_1.6-2.1ubuntu3.1_amd64.deb ... 2026-03-10T14:43:40.754 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libjq1:amd64 (1.6-2.1ubuntu3.1) ... 2026-03-10T14:43:40.771 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package jq. 2026-03-10T14:43:40.777 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../084-jq_1.6-2.1ubuntu3.1_amd64.deb ... 2026-03-10T14:43:40.778 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking jq (1.6-2.1ubuntu3.1) ... 2026-03-10T14:43:40.795 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package socat. 2026-03-10T14:43:40.802 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../085-socat_1.7.4.1-3ubuntu4_amd64.deb ... 2026-03-10T14:43:40.804 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking socat (1.7.4.1-3ubuntu4) ... 2026-03-10T14:43:40.831 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package xmlstarlet. 2026-03-10T14:43:40.838 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../086-xmlstarlet_1.6.1-2.1_amd64.deb ... 2026-03-10T14:43:40.839 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking xmlstarlet (1.6.1-2.1) ... 2026-03-10T14:43:40.889 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph-test. 
2026-03-10T14:43:40.896 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../087-ceph-test_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T14:43:40.897 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph-test (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T14:43:41.331 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T14:43:41.336 DEBUG:teuthology.parallel:result is None 2026-03-10T14:43:41.913 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph-volume. 2026-03-10T14:43:41.919 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../088-ceph-volume_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-10T14:43:41.921 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph-volume (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T14:43:41.956 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libcephfs-dev. 2026-03-10T14:43:41.960 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../089-libcephfs-dev_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T14:43:41.961 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T14:43:41.984 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package lua-socket:amd64. 2026-03-10T14:43:41.990 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../090-lua-socket_3.0~rc1+git+ac3201d-6_amd64.deb ... 2026-03-10T14:43:41.991 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ... 2026-03-10T14:43:42.020 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package lua-sec:amd64. 2026-03-10T14:43:42.023 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../091-lua-sec_1.0.2-1_amd64.deb ... 2026-03-10T14:43:42.024 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking lua-sec:amd64 (1.0.2-1) ... 
2026-03-10T14:43:42.073 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package nvme-cli. 2026-03-10T14:43:42.079 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../092-nvme-cli_1.16-3ubuntu0.3_amd64.deb ... 2026-03-10T14:43:42.079 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking nvme-cli (1.16-3ubuntu0.3) ... 2026-03-10T14:43:42.129 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package pkg-config. 2026-03-10T14:43:42.136 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../093-pkg-config_0.29.2-1ubuntu3_amd64.deb ... 2026-03-10T14:43:42.138 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking pkg-config (0.29.2-1ubuntu3) ... 2026-03-10T14:43:42.230 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python-asyncssh-doc. 2026-03-10T14:43:42.238 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../094-python-asyncssh-doc_2.5.0-1ubuntu0.1_all.deb ... 2026-03-10T14:43:42.338 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python-asyncssh-doc (2.5.0-1ubuntu0.1) ... 2026-03-10T14:43:42.429 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-iniconfig. 2026-03-10T14:43:42.432 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../095-python3-iniconfig_1.1.1-2_all.deb ... 2026-03-10T14:43:42.433 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-iniconfig (1.1.1-2) ... 2026-03-10T14:43:42.453 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-pastescript. 2026-03-10T14:43:42.457 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../096-python3-pastescript_2.0.2-4_all.deb ... 2026-03-10T14:43:42.458 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-pastescript (2.0.2-4) ... 2026-03-10T14:43:42.485 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-pluggy. 
2026-03-10T14:43:42.488 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../097-python3-pluggy_0.13.0-7.1_all.deb ... 2026-03-10T14:43:42.489 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-pluggy (0.13.0-7.1) ... 2026-03-10T14:43:42.512 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-psutil. 2026-03-10T14:43:42.515 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../098-python3-psutil_5.9.0-1build1_amd64.deb ... 2026-03-10T14:43:42.516 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-psutil (5.9.0-1build1) ... 2026-03-10T14:43:42.540 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-py. 2026-03-10T14:43:42.546 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../099-python3-py_1.10.0-1_all.deb ... 2026-03-10T14:43:42.546 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-py (1.10.0-1) ... 2026-03-10T14:43:42.572 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-pygments. 2026-03-10T14:43:42.579 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../100-python3-pygments_2.11.2+dfsg-2ubuntu0.1_all.deb ... 2026-03-10T14:43:42.580 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-pygments (2.11.2+dfsg-2ubuntu0.1) ... 2026-03-10T14:43:42.644 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-pyinotify. 2026-03-10T14:43:42.649 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../101-python3-pyinotify_0.9.6-1.3_all.deb ... 2026-03-10T14:43:42.650 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-pyinotify (0.9.6-1.3) ... 2026-03-10T14:43:42.668 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-toml. 2026-03-10T14:43:42.674 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../102-python3-toml_0.10.2-1_all.deb ... 
2026-03-10T14:43:42.675 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-toml (0.10.2-1) ... 2026-03-10T14:43:42.696 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-pytest. 2026-03-10T14:43:42.700 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../103-python3-pytest_6.2.5-1ubuntu2_all.deb ... 2026-03-10T14:43:42.701 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-pytest (6.2.5-1ubuntu2) ... 2026-03-10T14:43:42.732 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-simplejson. 2026-03-10T14:43:42.737 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../104-python3-simplejson_3.17.6-1build1_amd64.deb ... 2026-03-10T14:43:42.738 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-simplejson (3.17.6-1build1) ... 2026-03-10T14:43:42.764 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package qttranslations5-l10n. 2026-03-10T14:43:42.768 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../105-qttranslations5-l10n_5.15.3-1_all.deb ... 2026-03-10T14:43:42.770 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking qttranslations5-l10n (5.15.3-1) ... 2026-03-10T14:43:42.893 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package radosgw. 2026-03-10T14:43:42.898 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../106-radosgw_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T14:43:42.899 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking radosgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T14:43:43.285 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package rbd-fuse. 2026-03-10T14:43:43.285 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../107-rbd-fuse_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T14:43:43.287 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-10T14:43:43.308 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package smartmontools. 2026-03-10T14:43:43.310 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../108-smartmontools_7.2-1ubuntu0.1_amd64.deb ... 2026-03-10T14:43:43.318 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking smartmontools (7.2-1ubuntu0.1) ... 2026-03-10T14:43:43.362 INFO:teuthology.orchestra.run.vm00.stdout:Setting up smartmontools (7.2-1ubuntu0.1) ... 2026-03-10T14:43:43.614 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/smartd.service → /lib/systemd/system/smartmontools.service. 2026-03-10T14:43:43.614 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/smartmontools.service → /lib/systemd/system/smartmontools.service. 2026-03-10T14:43:43.982 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-iniconfig (1.1.1-2) ... 2026-03-10T14:43:44.048 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libdouble-conversion3:amd64 (3.1.7-4) ... 2026-03-10T14:43:44.051 INFO:teuthology.orchestra.run.vm00.stdout:Setting up nvme-cli (1.16-3ubuntu0.3) ... 2026-03-10T14:43:44.116 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmefc-boot-connections.service → /lib/systemd/system/nvmefc-boot-connections.service. 2026-03-10T14:43:44.379 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmf-autoconnect.service → /lib/systemd/system/nvmf-autoconnect.service. 2026-03-10T14:43:44.735 INFO:teuthology.orchestra.run.vm00.stdout:nvmf-connect.target is a disabled or a static unit, not starting it. 2026-03-10T14:43:44.743 INFO:teuthology.orchestra.run.vm00.stdout:Could not execute systemctl: at /usr/bin/deb-systemd-invoke line 142. 2026-03-10T14:43:44.744 INFO:teuthology.orchestra.run.vm00.stdout:Setting up cephadm (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-10T14:43:44.793 INFO:teuthology.orchestra.run.vm00.stdout:Adding system user cephadm....done 2026-03-10T14:43:44.803 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-waitress (1.4.4-1.1ubuntu1.1) ... 2026-03-10T14:43:44.881 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-jaraco.classes (3.2.1-3) ... 2026-03-10T14:43:44.956 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python-asyncssh-doc (2.5.0-1ubuntu0.1) ... 2026-03-10T14:43:44.959 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-jaraco.functools (3.4.0-2) ... 2026-03-10T14:43:45.028 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-repoze.lru (0.7-2) ... 2026-03-10T14:43:45.105 INFO:teuthology.orchestra.run.vm00.stdout:Setting up liboath0:amd64 (2.6.7-3ubuntu0.1) ... 2026-03-10T14:43:45.108 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-py (1.10.0-1) ... 2026-03-10T14:43:45.209 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-joblib (0.17.0-4ubuntu1) ... 2026-03-10T14:43:45.355 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-cachetools (5.0.0-1) ... 2026-03-10T14:43:45.435 INFO:teuthology.orchestra.run.vm00.stdout:Setting up unzip (6.0-26ubuntu3.2) ... 2026-03-10T14:43:45.444 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-pyinotify (0.9.6-1.3) ... 2026-03-10T14:43:45.522 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-threadpoolctl (3.1.0-1) ... 2026-03-10T14:43:45.601 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T14:43:45.689 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ... 2026-03-10T14:43:45.691 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libnbd0 (1.10.5-1) ... 2026-03-10T14:43:45.693 INFO:teuthology.orchestra.run.vm00.stdout:Setting up lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ... 
2026-03-10T14:43:45.696 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libreadline-dev:amd64 (8.1.2-1) ... 2026-03-10T14:43:45.698 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libfuse2:amd64 (2.9.9-5ubuntu3) ... 2026-03-10T14:43:45.700 INFO:teuthology.orchestra.run.vm00.stdout:Setting up lua5.1 (5.1.5-8.1build4) ... 2026-03-10T14:43:45.705 INFO:teuthology.orchestra.run.vm00.stdout:update-alternatives: using /usr/bin/lua5.1 to provide /usr/bin/lua (lua-interpreter) in auto mode 2026-03-10T14:43:45.707 INFO:teuthology.orchestra.run.vm00.stdout:update-alternatives: using /usr/bin/luac5.1 to provide /usr/bin/luac (lua-compiler) in auto mode 2026-03-10T14:43:45.708 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ... 2026-03-10T14:43:45.711 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-psutil (5.9.0-1build1) ... 2026-03-10T14:43:45.851 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-natsort (8.0.2-1) ... 2026-03-10T14:43:45.933 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-routes (2.5.1-1ubuntu1) ... 2026-03-10T14:43:46.014 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-simplejson (3.17.6-1build1) ... 2026-03-10T14:43:46.100 INFO:teuthology.orchestra.run.vm00.stdout:Setting up zip (3.0-12build2) ... 2026-03-10T14:43:46.102 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-pygments (2.11.2+dfsg-2ubuntu0.1) ... 2026-03-10T14:43:46.400 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-tempita (0.5.2-6ubuntu1) ... 2026-03-10T14:43:46.488 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python-pastedeploy-tpl (2.1.1-1) ... 2026-03-10T14:43:46.510 INFO:teuthology.orchestra.run.vm00.stdout:Setting up qttranslations5-l10n (5.15.3-1) ... 2026-03-10T14:43:46.512 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-wcwidth (0.2.5+dfsg1-1) ... 
2026-03-10T14:43:46.613 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-asyncssh (2.5.0-1ubuntu0.1) ... 2026-03-10T14:43:46.758 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-paste (3.5.0+dfsg1-1) ... 2026-03-10T14:43:46.901 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-cheroot (8.5.2+ds1-1ubuntu3.1) ... 2026-03-10T14:43:47.041 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ... 2026-03-10T14:43:47.194 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-jaraco.text (3.6.0-2) ... 2026-03-10T14:43:47.300 INFO:teuthology.orchestra.run.vm00.stdout:Setting up socat (1.7.4.1-3ubuntu4) ... 2026-03-10T14:43:47.315 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T14:43:47.553 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-sklearn (0.23.2-5ubuntu6) ... 2026-03-10T14:43:48.263 INFO:teuthology.orchestra.run.vm00.stdout:Setting up pkg-config (0.29.2-1ubuntu3) ... 2026-03-10T14:43:48.285 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-10T14:43:48.289 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-toml (0.10.2-1) ... 2026-03-10T14:43:48.367 INFO:teuthology.orchestra.run.vm00.stdout:Setting up librdkafka1:amd64 (1.8.0-1build1) ... 2026-03-10T14:43:48.369 INFO:teuthology.orchestra.run.vm00.stdout:Setting up xmlstarlet (1.6.1-2.1) ... 2026-03-10T14:43:48.372 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-pluggy (0.13.0-7.1) ... 2026-03-10T14:43:48.441 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-zc.lockfile (2.0-1) ... 2026-03-10T14:43:48.507 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-10T14:43:48.510 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-rsa (4.8-1) ... 
2026-03-10T14:43:48.591 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-singledispatch (3.4.0.3-3) ... 2026-03-10T14:43:48.661 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-logutils (0.3.3-8) ... 2026-03-10T14:43:48.731 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-tempora (4.1.2-1) ... 2026-03-10T14:43:48.811 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-simplegeneric (0.8.1-3) ... 2026-03-10T14:43:48.883 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-prettytable (2.5.0-2) ... 2026-03-10T14:43:48.957 INFO:teuthology.orchestra.run.vm00.stdout:Setting up liblttng-ust1:amd64 (2.13.1-1ubuntu1) ... 2026-03-10T14:43:48.960 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-websocket (1.2.3-1) ... 2026-03-10T14:43:49.044 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libonig5:amd64 (6.9.7.1-2build1) ... 2026-03-10T14:43:49.047 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-requests-oauthlib (1.3.0+ds-0.1) ... 2026-03-10T14:43:49.123 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-mako (1.1.3+ds1-2ubuntu0.1) ... 2026-03-10T14:43:49.209 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-webob (1:1.8.6-1.1ubuntu0.1) ... 2026-03-10T14:43:49.303 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-jaraco.collections (3.4.0-2) ... 2026-03-10T14:43:49.374 INFO:teuthology.orchestra.run.vm00.stdout:Setting up liblua5.3-dev:amd64 (5.3.6-1build1) ... 2026-03-10T14:43:49.376 INFO:teuthology.orchestra.run.vm00.stdout:Setting up lua-sec:amd64 (1.0.2-1) ... 2026-03-10T14:43:49.378 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libjq1:amd64 (1.6-2.1ubuntu3.1) ... 2026-03-10T14:43:49.381 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-pytest (6.2.5-1ubuntu2) ... 2026-03-10T14:43:49.531 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-pastedeploy (2.1.1-1) ... 
2026-03-10T14:43:49.606 INFO:teuthology.orchestra.run.vm00.stdout:Setting up lua-any (27ubuntu1) ... 2026-03-10T14:43:49.608 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-portend (3.0.0-1) ... 2026-03-10T14:43:49.677 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-10T14:43:49.682 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-google-auth (1.5.1-3) ... 2026-03-10T14:43:49.768 INFO:teuthology.orchestra.run.vm00.stdout:Setting up jq (1.6-2.1ubuntu3.1) ... 2026-03-10T14:43:49.770 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-webtest (2.0.35-1) ... 2026-03-10T14:43:49.850 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-cherrypy3 (18.6.1-4) ... 2026-03-10T14:43:49.988 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-pastescript (2.0.2-4) ... 2026-03-10T14:43:50.078 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-pecan (1.3.3-4ubuntu2) ... 2026-03-10T14:43:50.345 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libthrift-0.16.0:amd64 (0.16.0-2) ... 2026-03-10T14:43:50.348 INFO:teuthology.orchestra.run.vm00.stdout:Setting up librados2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T14:43:50.350 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T14:43:50.353 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-kubernetes (12.0.1-1ubuntu1) ... 2026-03-10T14:43:50.966 INFO:teuthology.orchestra.run.vm00.stdout:Setting up luarocks (3.8.0+dfsg1-1) ... 2026-03-10T14:43:50.974 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T14:43:50.976 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T14:43:50.979 INFO:teuthology.orchestra.run.vm00.stdout:Setting up librbd1 (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-10T14:43:50.981 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T14:43:50.984 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T14:43:51.057 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/remote-fs.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target. 2026-03-10T14:43:51.057 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target. 2026-03-10T14:43:51.455 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T14:43:51.458 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-rados (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T14:43:51.460 INFO:teuthology.orchestra.run.vm00.stdout:Setting up librgw2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T14:43:51.463 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-rbd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T14:43:51.466 INFO:teuthology.orchestra.run.vm00.stdout:Setting up rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T14:43:51.468 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-rgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T14:43:51.471 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T14:43:51.474 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-10T14:43:51.509 INFO:teuthology.orchestra.run.vm00.stdout:Adding group ceph....done 2026-03-10T14:43:51.550 INFO:teuthology.orchestra.run.vm00.stdout:Adding system user ceph....done 2026-03-10T14:43:51.560 INFO:teuthology.orchestra.run.vm00.stdout:Setting system user ceph properties....done 2026-03-10T14:43:51.565 INFO:teuthology.orchestra.run.vm00.stdout:chown: cannot access '/var/log/ceph/*.log*': No such file or directory 2026-03-10T14:43:51.633 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /lib/systemd/system/ceph.target. 2026-03-10T14:43:51.893 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/rbdmap.service → /lib/systemd/system/rbdmap.service. 2026-03-10T14:43:52.287 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph-test (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T14:43:52.291 INFO:teuthology.orchestra.run.vm00.stdout:Setting up radosgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T14:43:52.527 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target. 2026-03-10T14:43:52.527 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target. 2026-03-10T14:43:52.889 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph-base (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T14:43:52.978 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /lib/systemd/system/ceph-crash.service. 2026-03-10T14:43:53.367 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph-mds (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-10T14:43:53.434 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target. 2026-03-10T14:43:53.434 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target. 2026-03-10T14:43:53.868 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T14:43:53.929 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target. 2026-03-10T14:43:53.929 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target. 2026-03-10T14:43:54.312 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph-osd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T14:43:54.395 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target. 2026-03-10T14:43:54.395 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target. 2026-03-10T14:43:54.751 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T14:43:54.754 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T14:43:54.800 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph-mon (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T14:43:54.873 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target. 
2026-03-10T14:43:54.873 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target. 2026-03-10T14:43:55.248 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T14:43:55.261 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T14:43:55.264 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T14:43:55.276 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph-volume (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T14:43:55.400 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for mailcap (3.70+nmu1ubuntu1) ... 2026-03-10T14:43:55.409 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-10T14:43:55.425 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-10T14:43:55.528 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for install-info (6.8-4build1) ... 2026-03-10T14:43:55.980 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T14:43:55.981 INFO:teuthology.orchestra.run.vm00.stdout:Running kernel seems to be up-to-date. 
2026-03-10T14:43:55.981 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T14:43:55.981 INFO:teuthology.orchestra.run.vm00.stdout:Services to be restarted: 2026-03-10T14:43:55.988 INFO:teuthology.orchestra.run.vm00.stdout: systemctl restart packagekit.service 2026-03-10T14:43:55.991 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T14:43:55.992 INFO:teuthology.orchestra.run.vm00.stdout:Service restarts being deferred: 2026-03-10T14:43:55.992 INFO:teuthology.orchestra.run.vm00.stdout: systemctl restart networkd-dispatcher.service 2026-03-10T14:43:55.992 INFO:teuthology.orchestra.run.vm00.stdout: systemctl restart unattended-upgrades.service 2026-03-10T14:43:55.992 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T14:43:55.992 INFO:teuthology.orchestra.run.vm00.stdout:No containers need to be restarted. 2026-03-10T14:43:55.992 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T14:43:55.992 INFO:teuthology.orchestra.run.vm00.stdout:No user sessions are running outdated binaries. 2026-03-10T14:43:55.992 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T14:43:55.992 INFO:teuthology.orchestra.run.vm00.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host. 2026-03-10T14:43:56.945 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T14:43:56.948 DEBUG:teuthology.orchestra.run.vm00:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install python3-pytest python3-xmltodict python3-jmespath 2026-03-10T14:43:57.024 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists... 2026-03-10T14:43:57.241 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree... 2026-03-10T14:43:57.241 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information... 
2026-03-10T14:43:57.462 INFO:teuthology.orchestra.run.vm00.stdout:python3-pytest is already the newest version (6.2.5-1ubuntu2). 2026-03-10T14:43:57.462 INFO:teuthology.orchestra.run.vm00.stdout:python3-pytest set to manually installed. 2026-03-10T14:43:57.462 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T14:43:57.462 INFO:teuthology.orchestra.run.vm00.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-10T14:43:57.463 INFO:teuthology.orchestra.run.vm00.stdout: libsgutils2-2 sg3-utils sg3-utils-udev 2026-03-10T14:43:57.463 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T14:43:57.483 INFO:teuthology.orchestra.run.vm00.stdout:The following NEW packages will be installed: 2026-03-10T14:43:57.483 INFO:teuthology.orchestra.run.vm00.stdout: python3-jmespath python3-xmltodict 2026-03-10T14:43:57.572 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 2 newly installed, 0 to remove and 12 not upgraded. 2026-03-10T14:43:57.572 INFO:teuthology.orchestra.run.vm00.stdout:Need to get 34.3 kB of archives. 2026-03-10T14:43:57.572 INFO:teuthology.orchestra.run.vm00.stdout:After this operation, 146 kB of additional disk space will be used. 2026-03-10T14:43:57.572 INFO:teuthology.orchestra.run.vm00.stdout:Get:1 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jmespath all 0.10.0-1 [21.7 kB] 2026-03-10T14:43:57.589 INFO:teuthology.orchestra.run.vm00.stdout:Get:2 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-xmltodict all 0.12.0-2 [12.6 kB] 2026-03-10T14:43:57.821 INFO:teuthology.orchestra.run.vm00.stdout:Fetched 34.3 kB in 0s (344 kB/s) 2026-03-10T14:43:57.836 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-jmespath. 2026-03-10T14:43:57.874 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... 118577 files and directories currently installed.) 2026-03-10T14:43:57.877 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../python3-jmespath_0.10.0-1_all.deb ... 2026-03-10T14:43:57.878 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-jmespath (0.10.0-1) ... 2026-03-10T14:43:57.896 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-xmltodict. 2026-03-10T14:43:57.902 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../python3-xmltodict_0.12.0-2_all.deb ... 2026-03-10T14:43:57.903 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-xmltodict (0.12.0-2) ... 2026-03-10T14:43:57.933 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-xmltodict (0.12.0-2) ... 2026-03-10T14:43:58.009 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-jmespath (0.10.0-1) ... 2026-03-10T14:43:58.368 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T14:43:58.368 INFO:teuthology.orchestra.run.vm00.stdout:Running kernel seems to be up-to-date.
2026-03-10T14:43:58.368 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T14:43:58.368 INFO:teuthology.orchestra.run.vm00.stdout:Services to be restarted: 2026-03-10T14:43:58.375 INFO:teuthology.orchestra.run.vm00.stdout: systemctl restart packagekit.service 2026-03-10T14:43:58.378 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T14:43:58.379 INFO:teuthology.orchestra.run.vm00.stdout:Service restarts being deferred: 2026-03-10T14:43:58.379 INFO:teuthology.orchestra.run.vm00.stdout: systemctl restart networkd-dispatcher.service 2026-03-10T14:43:58.379 INFO:teuthology.orchestra.run.vm00.stdout: systemctl restart unattended-upgrades.service 2026-03-10T14:43:58.379 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T14:43:58.379 INFO:teuthology.orchestra.run.vm00.stdout:No containers need to be restarted. 2026-03-10T14:43:58.379 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T14:43:58.379 INFO:teuthology.orchestra.run.vm00.stdout:No user sessions are running outdated binaries. 2026-03-10T14:43:58.379 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T14:43:58.379 INFO:teuthology.orchestra.run.vm00.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host. 2026-03-10T14:43:59.273 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 
2026-03-10T14:43:59.278 DEBUG:teuthology.parallel:result is None 2026-03-10T14:43:59.278 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T14:43:59.929 DEBUG:teuthology.orchestra.run.vm00:> dpkg-query -W -f '${Version}' ceph 2026-03-10T14:43:59.939 INFO:teuthology.orchestra.run.vm00.stdout:19.2.3-678-ge911bdeb-1jammy 2026-03-10T14:43:59.939 INFO:teuthology.packaging:The installed version of ceph is 19.2.3-678-ge911bdeb-1jammy 2026-03-10T14:43:59.940 INFO:teuthology.task.install:The correct ceph version 19.2.3-678-ge911bdeb-1jammy is installed. 2026-03-10T14:43:59.941 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T14:44:00.517 DEBUG:teuthology.orchestra.run.vm03:> dpkg-query -W -f '${Version}' ceph 2026-03-10T14:44:00.528 INFO:teuthology.orchestra.run.vm03.stdout:19.2.3-678-ge911bdeb-1jammy 2026-03-10T14:44:00.528 INFO:teuthology.packaging:The installed version of ceph is 19.2.3-678-ge911bdeb-1jammy 2026-03-10T14:44:00.528 INFO:teuthology.task.install:The correct ceph version 19.2.3-678-ge911bdeb-1jammy is installed. 2026-03-10T14:44:00.529 INFO:teuthology.task.install.util:Shipping valgrind.supp... 2026-03-10T14:44:00.529 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-10T14:44:00.529 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp 2026-03-10T14:44:00.537 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-10T14:44:00.537 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp 2026-03-10T14:44:00.578 INFO:teuthology.task.install.util:Shipping 'daemon-helper'... 
2026-03-10T14:44:00.578 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-10T14:44:00.578 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/usr/bin/daemon-helper 2026-03-10T14:44:00.588 DEBUG:teuthology.orchestra.run.vm00:> sudo chmod a=rx -- /usr/bin/daemon-helper 2026-03-10T14:44:00.637 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-10T14:44:00.637 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/usr/bin/daemon-helper 2026-03-10T14:44:00.645 DEBUG:teuthology.orchestra.run.vm03:> sudo chmod a=rx -- /usr/bin/daemon-helper 2026-03-10T14:44:00.693 INFO:teuthology.task.install.util:Shipping 'adjust-ulimits'... 2026-03-10T14:44:00.693 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-10T14:44:00.693 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/usr/bin/adjust-ulimits 2026-03-10T14:44:00.701 DEBUG:teuthology.orchestra.run.vm00:> sudo chmod a=rx -- /usr/bin/adjust-ulimits 2026-03-10T14:44:00.752 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-10T14:44:00.752 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/usr/bin/adjust-ulimits 2026-03-10T14:44:00.761 DEBUG:teuthology.orchestra.run.vm03:> sudo chmod a=rx -- /usr/bin/adjust-ulimits 2026-03-10T14:44:00.810 INFO:teuthology.task.install.util:Shipping 'stdin-killer'... 2026-03-10T14:44:00.811 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-10T14:44:00.811 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/usr/bin/stdin-killer 2026-03-10T14:44:00.819 DEBUG:teuthology.orchestra.run.vm00:> sudo chmod a=rx -- /usr/bin/stdin-killer 2026-03-10T14:44:00.868 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-10T14:44:00.868 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/usr/bin/stdin-killer 2026-03-10T14:44:00.876 DEBUG:teuthology.orchestra.run.vm03:> sudo chmod a=rx -- /usr/bin/stdin-killer 2026-03-10T14:44:00.925 INFO:teuthology.run_tasks:Running task cephadm... 
2026-03-10T14:44:00.974 INFO:tasks.cephadm:Config: {'conf': {'mgr': {'debug mgr': 20, 'debug ms': 1}, 'global': {'mon election default strategy': 3, 'ms bind msgr1': False, 'ms bind msgr2': True, 'ms type': 'async'}, 'mon': {'debug mon': 20, 'debug ms': 1, 'debug paxos': 20}, 'osd': {'debug ms': 1, 'debug osd': 20, 'osd mclock iops capacity threshold hdd': 49000, 'osd shutdown pgref assert': True}}, 'flavor': 'default', 'log-ignorelist': ['\\(MDS_ALL_DOWN\\)', '\\(MDS_UP_LESS_THAN_MAX\\)', 'but it is still running', 'overall HEALTH_', '\\(OSDMAP_FLAGS\\)', '\\(PG_', '\\(OSD_', '\\(OBJECT_', '\\(POOL_APP_NOT_ENABLED\\)'], 'log-only-match': ['CEPHADM_'], 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'cephadm_mode': 'root'} 2026-03-10T14:44:00.974 INFO:tasks.cephadm:Cluster image is quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T14:44:00.974 INFO:tasks.cephadm:Cluster fsid is 93bd26bc-1c8f-11f1-8404-610ce866bde7 2026-03-10T14:44:00.974 INFO:tasks.cephadm:Choosing monitor IPs and ports... 2026-03-10T14:44:00.974 INFO:tasks.cephadm:Monitor IPs: {'mon.a': '192.168.123.100', 'mon.c': '[v2:192.168.123.100:3301,v1:192.168.123.100:6790]', 'mon.b': '192.168.123.103'} 2026-03-10T14:44:00.974 INFO:tasks.cephadm:First mon is mon.a on vm00 2026-03-10T14:44:00.974 INFO:tasks.cephadm:First mgr is y 2026-03-10T14:44:00.974 INFO:tasks.cephadm:Normalizing hostnames... 
2026-03-10T14:44:00.974 DEBUG:teuthology.orchestra.run.vm00:> sudo hostname $(hostname -s) 2026-03-10T14:44:00.983 DEBUG:teuthology.orchestra.run.vm03:> sudo hostname $(hostname -s) 2026-03-10T14:44:00.992 INFO:tasks.cephadm:Downloading "compiled" cephadm from cachra 2026-03-10T14:44:00.992 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T14:44:01.566 INFO:tasks.cephadm:builder_project result: [{'url': 'https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default/', 'chacra_url': 'https://1.chacra.ceph.com/repos/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default/', 'ref': 'squid', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'distro': 'ubuntu', 'distro_version': '22.04', 'distro_codename': 'jammy', 'modified': '2026-02-25 19:37:07.680480', 'status': 'ready', 'flavor': 'default', 'project': 'ceph', 'archs': ['x86_64'], 'extra': {'version': '19.2.3-678-ge911bdeb', 'package_manager_version': '19.2.3-678-ge911bdeb-1jammy', 'build_url': 'https://jenkins.ceph.com/job/ceph-dev-pipeline/3275/', 'root_build_cause': '', 'node_name': '10.20.192.98+toko08', 'job_name': 'ceph-dev-pipeline'}}] 2026-03-10T14:44:02.187 INFO:tasks.util.chacra:got chacra host 1.chacra.ceph.com, ref squid, sha1 e911bdebe5c8faa3800735d1568fcdca65db60df from https://shaman.ceph.com/api/search/?project=ceph&distros=ubuntu%2F22.04%2Fx86_64&flavor=default&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T14:44:02.188 INFO:tasks.cephadm:Discovered cachra url: https://1.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/x86_64/flavors/default/cephadm 2026-03-10T14:44:02.188 INFO:tasks.cephadm:Downloading cephadm from url: 
https://1.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/x86_64/flavors/default/cephadm 2026-03-10T14:44:02.188 DEBUG:teuthology.orchestra.run.vm00:> curl --silent -L https://1.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/x86_64/flavors/default/cephadm > /home/ubuntu/cephtest/cephadm && ls -l /home/ubuntu/cephtest/cephadm 2026-03-10T14:44:03.550 INFO:teuthology.orchestra.run.vm00.stdout:-rw-rw-r-- 1 ubuntu ubuntu 795696 Mar 10 14:44 /home/ubuntu/cephtest/cephadm 2026-03-10T14:44:03.552 DEBUG:teuthology.orchestra.run.vm03:> curl --silent -L https://1.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/x86_64/flavors/default/cephadm > /home/ubuntu/cephtest/cephadm && ls -l /home/ubuntu/cephtest/cephadm 2026-03-10T14:44:04.893 INFO:teuthology.orchestra.run.vm03.stdout:-rw-rw-r-- 1 ubuntu ubuntu 795696 Mar 10 14:44 /home/ubuntu/cephtest/cephadm 2026-03-10T14:44:04.893 DEBUG:teuthology.orchestra.run.vm00:> test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm 2026-03-10T14:44:04.897 DEBUG:teuthology.orchestra.run.vm03:> test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm 2026-03-10T14:44:04.907 INFO:tasks.cephadm:Pulling image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df on all hosts... 
2026-03-10T14:44:04.907 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df pull 2026-03-10T14:44:04.942 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df pull 2026-03-10T14:44:05.058 INFO:teuthology.orchestra.run.vm00.stderr:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df... 2026-03-10T14:44:05.061 INFO:teuthology.orchestra.run.vm03.stderr:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df... 2026-03-10T14:45:27.901 INFO:teuthology.orchestra.run.vm00.stdout:{ 2026-03-10T14:45:27.901 INFO:teuthology.orchestra.run.vm00.stdout: "ceph_version": "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)", 2026-03-10T14:45:27.901 INFO:teuthology.orchestra.run.vm00.stdout: "image_id": "654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c", 2026-03-10T14:45:27.901 INFO:teuthology.orchestra.run.vm00.stdout: "repo_digests": [ 2026-03-10T14:45:27.901 INFO:teuthology.orchestra.run.vm00.stdout: "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc" 2026-03-10T14:45:27.901 INFO:teuthology.orchestra.run.vm00.stdout: ] 2026-03-10T14:45:27.901 INFO:teuthology.orchestra.run.vm00.stdout:} 2026-03-10T14:45:27.905 INFO:teuthology.orchestra.run.vm03.stdout:{ 2026-03-10T14:45:27.905 INFO:teuthology.orchestra.run.vm03.stdout: "ceph_version": "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)", 2026-03-10T14:45:27.905 INFO:teuthology.orchestra.run.vm03.stdout: "image_id": "654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c", 2026-03-10T14:45:27.905 INFO:teuthology.orchestra.run.vm03.stdout: "repo_digests": [ 2026-03-10T14:45:27.905 
INFO:teuthology.orchestra.run.vm03.stdout: "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc" 2026-03-10T14:45:27.905 INFO:teuthology.orchestra.run.vm03.stdout: ] 2026-03-10T14:45:27.905 INFO:teuthology.orchestra.run.vm03.stdout:} 2026-03-10T14:45:27.916 DEBUG:teuthology.orchestra.run.vm00:> sudo mkdir -p /etc/ceph 2026-03-10T14:45:27.923 DEBUG:teuthology.orchestra.run.vm03:> sudo mkdir -p /etc/ceph 2026-03-10T14:45:27.931 DEBUG:teuthology.orchestra.run.vm00:> sudo chmod 777 /etc/ceph 2026-03-10T14:45:27.972 DEBUG:teuthology.orchestra.run.vm03:> sudo chmod 777 /etc/ceph 2026-03-10T14:45:27.981 INFO:tasks.cephadm:Writing seed config... 2026-03-10T14:45:27.982 INFO:tasks.cephadm: override: [mgr] debug mgr = 20 2026-03-10T14:45:27.982 INFO:tasks.cephadm: override: [mgr] debug ms = 1 2026-03-10T14:45:27.982 INFO:tasks.cephadm: override: [global] mon election default strategy = 3 2026-03-10T14:45:27.982 INFO:tasks.cephadm: override: [global] ms bind msgr1 = False 2026-03-10T14:45:27.982 INFO:tasks.cephadm: override: [global] ms bind msgr2 = True 2026-03-10T14:45:27.982 INFO:tasks.cephadm: override: [global] ms type = async 2026-03-10T14:45:27.982 INFO:tasks.cephadm: override: [mon] debug mon = 20 2026-03-10T14:45:27.982 INFO:tasks.cephadm: override: [mon] debug ms = 1 2026-03-10T14:45:27.982 INFO:tasks.cephadm: override: [mon] debug paxos = 20 2026-03-10T14:45:27.982 INFO:tasks.cephadm: override: [osd] debug ms = 1 2026-03-10T14:45:27.982 INFO:tasks.cephadm: override: [osd] debug osd = 20 2026-03-10T14:45:27.982 INFO:tasks.cephadm: override: [osd] osd mclock iops capacity threshold hdd = 49000 2026-03-10T14:45:27.982 INFO:tasks.cephadm: override: [osd] osd shutdown pgref assert = True 2026-03-10T14:45:27.982 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-10T14:45:27.982 DEBUG:teuthology.orchestra.run.vm00:> dd of=/home/ubuntu/cephtest/seed.ceph.conf 2026-03-10T14:45:28.016 DEBUG:tasks.cephadm:Final config: [global] 
# make logging friendly to teuthology log_to_file = true log_to_stderr = false log to journald = false mon cluster log to file = true mon cluster log file level = debug mon clock drift allowed = 1.000 # replicate across OSDs, not hosts osd crush chooseleaf type = 0 #osd pool default size = 2 osd pool default erasure code profile = plugin=jerasure technique=reed_sol_van k=2 m=1 crush-failure-domain=osd # enable some debugging auth debug = true ms die on old message = true ms die on bug = true debug asserts on shutdown = true # adjust warnings mon max pg per osd = 10000# >= luminous mon pg warn max object skew = 0 mon osd allow primary affinity = true mon osd allow pg remap = true mon warn on legacy crush tunables = false mon warn on crush straw calc version zero = false mon warn on no sortbitwise = false mon warn on osd down out interval zero = false mon warn on too few osds = false mon_warn_on_pool_pg_num_not_power_of_two = false # disable pg_autoscaler by default for new pools osd_pool_default_pg_autoscale_mode = off # tests delete pools mon allow pool delete = true fsid = 93bd26bc-1c8f-11f1-8404-610ce866bde7 mon election default strategy = 3 ms bind msgr1 = False ms bind msgr2 = True ms type = async [osd] osd scrub load threshold = 5.0 osd scrub max interval = 600 osd mclock profile = high_recovery_ops osd recover clone overlap = true osd recovery max chunk = 1048576 osd deep scrub update digest min age = 30 osd map max advance = 10 osd memory target autotune = true # debugging osd debug shutdown = true osd debug op order = true osd debug verify stray on activate = true osd debug pg log writeout = true osd debug verify cached snaps = true osd debug verify missing on start = true osd debug misdirected ops = true osd op queue = debug_random osd op queue cut off = debug_random osd shutdown pgref assert = True bdev debug aio = true osd sloppy crc = true debug ms = 1 debug osd = 20 osd mclock iops capacity threshold hdd = 49000 [mgr] mon reweight min pgs per osd = 4 
mon reweight min bytes per osd = 10 mgr/telemetry/nag = false debug mgr = 20 debug ms = 1 [mon] mon data avail warn = 5 mon mgr mkfs grace = 240 mon reweight min pgs per osd = 4 mon osd reporter subtree level = osd mon osd prime pg temp = true mon reweight min bytes per osd = 10 # rotate auth tickets quickly to exercise renewal paths auth mon ticket ttl = 660# 11m auth service ticket ttl = 240# 4m # don't complain about global id reclaim mon_warn_on_insecure_global_id_reclaim = false mon_warn_on_insecure_global_id_reclaim_allowed = false debug mon = 20 debug ms = 1 debug paxos = 20 [client.rgw] rgw cache enabled = true rgw enable ops log = true rgw enable usage log = true 2026-03-10T14:45:28.016 DEBUG:teuthology.orchestra.run.vm00:mon.a> sudo journalctl -f -n 0 -u ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@mon.a.service 2026-03-10T14:45:28.058 DEBUG:teuthology.orchestra.run.vm00:mgr.y> sudo journalctl -f -n 0 -u ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@mgr.y.service 2026-03-10T14:45:28.102 INFO:tasks.cephadm:Bootstrapping... 
2026-03-10T14:45:28.102 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df -v bootstrap --fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id y --orphan-initial-daemons --skip-monitoring-stack --mon-ip 192.168.123.100 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring 2026-03-10T14:45:28.241 INFO:teuthology.orchestra.run.vm00.stdout:-------------------------------------------------------------------------------- 2026-03-10T14:45:28.241 INFO:teuthology.orchestra.run.vm00.stdout:cephadm ['--image', 'quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df', '-v', 'bootstrap', '--fsid', '93bd26bc-1c8f-11f1-8404-610ce866bde7', '--config', '/home/ubuntu/cephtest/seed.ceph.conf', '--output-config', '/etc/ceph/ceph.conf', '--output-keyring', '/etc/ceph/ceph.client.admin.keyring', '--output-pub-ssh-key', '/home/ubuntu/cephtest/ceph.pub', '--mon-id', 'a', '--mgr-id', 'y', '--orphan-initial-daemons', '--skip-monitoring-stack', '--mon-ip', '192.168.123.100', '--skip-admin-label'] 2026-03-10T14:45:28.241 INFO:teuthology.orchestra.run.vm00.stderr:Specifying an fsid for your cluster offers no advantages and may increase the likelihood of fsid conflicts. 2026-03-10T14:45:28.241 INFO:teuthology.orchestra.run.vm00.stdout:Verifying podman|docker is present... 2026-03-10T14:45:28.241 INFO:teuthology.orchestra.run.vm00.stdout:Verifying lvm2 is present... 2026-03-10T14:45:28.241 INFO:teuthology.orchestra.run.vm00.stdout:Verifying time synchronization is in place... 
2026-03-10T14:45:28.244 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service 2026-03-10T14:45:28.244 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory 2026-03-10T14:45:28.246 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 3 from systemctl is-active chrony.service 2026-03-10T14:45:28.247 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout inactive 2026-03-10T14:45:28.249 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 1 from systemctl is-enabled chronyd.service 2026-03-10T14:45:28.249 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Failed to get unit file state for chronyd.service: No such file or directory 2026-03-10T14:45:28.251 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 3 from systemctl is-active chronyd.service 2026-03-10T14:45:28.251 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout inactive 2026-03-10T14:45:28.253 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 1 from systemctl is-enabled systemd-timesyncd.service 2026-03-10T14:45:28.253 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout masked 2026-03-10T14:45:28.255 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 3 from systemctl is-active systemd-timesyncd.service 2026-03-10T14:45:28.256 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout inactive 2026-03-10T14:45:28.258 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 1 from systemctl is-enabled ntpd.service 2026-03-10T14:45:28.258 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Failed to get unit file state for ntpd.service: No such file or directory 2026-03-10T14:45:28.260 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 3 from systemctl is-active ntpd.service 2026-03-10T14:45:28.260 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout inactive 
2026-03-10T14:45:28.262 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout enabled 2026-03-10T14:45:28.265 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout active 2026-03-10T14:45:28.265 INFO:teuthology.orchestra.run.vm00.stdout:Unit ntp.service is enabled and running 2026-03-10T14:45:28.265 INFO:teuthology.orchestra.run.vm00.stdout:Repeating the final host check... 2026-03-10T14:45:28.265 INFO:teuthology.orchestra.run.vm00.stdout:docker (/usr/bin/docker) is present 2026-03-10T14:45:28.265 INFO:teuthology.orchestra.run.vm00.stdout:systemctl is present 2026-03-10T14:45:28.265 INFO:teuthology.orchestra.run.vm00.stdout:lvcreate is present 2026-03-10T14:45:28.268 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service 2026-03-10T14:45:28.268 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory 2026-03-10T14:45:28.270 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 3 from systemctl is-active chrony.service 2026-03-10T14:45:28.270 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout inactive 2026-03-10T14:45:28.273 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 1 from systemctl is-enabled chronyd.service 2026-03-10T14:45:28.273 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Failed to get unit file state for chronyd.service: No such file or directory 2026-03-10T14:45:28.275 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 3 from systemctl is-active chronyd.service 2026-03-10T14:45:28.275 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout inactive 2026-03-10T14:45:28.277 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 1 from systemctl is-enabled systemd-timesyncd.service 2026-03-10T14:45:28.277 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout masked 2026-03-10T14:45:28.280 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 3 
from systemctl is-active systemd-timesyncd.service 2026-03-10T14:45:28.280 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout inactive 2026-03-10T14:45:28.282 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 1 from systemctl is-enabled ntpd.service 2026-03-10T14:45:28.282 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Failed to get unit file state for ntpd.service: No such file or directory 2026-03-10T14:45:28.284 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 3 from systemctl is-active ntpd.service 2026-03-10T14:45:28.284 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout inactive 2026-03-10T14:45:28.287 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout enabled 2026-03-10T14:45:28.289 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout active 2026-03-10T14:45:28.289 INFO:teuthology.orchestra.run.vm00.stdout:Unit ntp.service is enabled and running 2026-03-10T14:45:28.289 INFO:teuthology.orchestra.run.vm00.stdout:Host looks OK 2026-03-10T14:45:28.289 INFO:teuthology.orchestra.run.vm00.stdout:Cluster fsid: 93bd26bc-1c8f-11f1-8404-610ce866bde7 2026-03-10T14:45:28.289 INFO:teuthology.orchestra.run.vm00.stdout:Acquiring lock 140715822385712 on /run/cephadm/93bd26bc-1c8f-11f1-8404-610ce866bde7.lock 2026-03-10T14:45:28.289 INFO:teuthology.orchestra.run.vm00.stdout:Lock 140715822385712 acquired on /run/cephadm/93bd26bc-1c8f-11f1-8404-610ce866bde7.lock 2026-03-10T14:45:28.289 INFO:teuthology.orchestra.run.vm00.stdout:Verifying IP 192.168.123.100 port 3300 ... 2026-03-10T14:45:28.289 INFO:teuthology.orchestra.run.vm00.stdout:Verifying IP 192.168.123.100 port 6789 ... 
2026-03-10T14:45:28.289 INFO:teuthology.orchestra.run.vm00.stdout:Base mon IP(s) is [192.168.123.100:3300, 192.168.123.100:6789], mon addrv is [v2:192.168.123.100:3300,v1:192.168.123.100:6789] 2026-03-10T14:45:28.291 INFO:teuthology.orchestra.run.vm00.stdout:/usr/sbin/ip: stdout default via 192.168.123.1 dev ens3 proto dhcp src 192.168.123.100 metric 100 2026-03-10T14:45:28.325 INFO:teuthology.orchestra.run.vm00.stdout:/usr/sbin/ip: stdout 172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown 2026-03-10T14:45:28.325 INFO:teuthology.orchestra.run.vm00.stdout:/usr/sbin/ip: stdout 192.168.123.0/24 dev ens3 proto kernel scope link src 192.168.123.100 metric 100 2026-03-10T14:45:28.326 INFO:teuthology.orchestra.run.vm00.stdout:/usr/sbin/ip: stdout 192.168.123.1 dev ens3 proto dhcp scope link src 192.168.123.100 metric 100 2026-03-10T14:45:28.326 INFO:teuthology.orchestra.run.vm00.stdout:/usr/sbin/ip: stdout ::1 dev lo proto kernel metric 256 pref medium 2026-03-10T14:45:28.326 INFO:teuthology.orchestra.run.vm00.stdout:/usr/sbin/ip: stdout fe80::/64 dev ens3 proto kernel metric 256 pref medium 2026-03-10T14:45:28.326 INFO:teuthology.orchestra.run.vm00.stdout:/usr/sbin/ip: stdout 1: lo: mtu 65536 state UNKNOWN qlen 1000 2026-03-10T14:45:28.326 INFO:teuthology.orchestra.run.vm00.stdout:/usr/sbin/ip: stdout inet6 ::1/128 scope host 2026-03-10T14:45:28.326 INFO:teuthology.orchestra.run.vm00.stdout:/usr/sbin/ip: stdout valid_lft forever preferred_lft forever 2026-03-10T14:45:28.326 INFO:teuthology.orchestra.run.vm00.stdout:/usr/sbin/ip: stdout 2: ens3: mtu 1500 state UP qlen 1000 2026-03-10T14:45:28.326 INFO:teuthology.orchestra.run.vm00.stdout:/usr/sbin/ip: stdout inet6 fe80::5055:ff:fe00:0/64 scope link 2026-03-10T14:45:28.326 INFO:teuthology.orchestra.run.vm00.stdout:/usr/sbin/ip: stdout valid_lft forever preferred_lft forever 2026-03-10T14:45:28.326 INFO:teuthology.orchestra.run.vm00.stdout:Mon IP `192.168.123.100` is in CIDR network `192.168.123.0/24` 
2026-03-10T14:45:28.326 INFO:teuthology.orchestra.run.vm00.stdout:Mon IP `192.168.123.100` is in CIDR network `192.168.123.0/24`
2026-03-10T14:45:28.326 INFO:teuthology.orchestra.run.vm00.stdout:Mon IP `192.168.123.100` is in CIDR network `192.168.123.1/32`
2026-03-10T14:45:28.326 INFO:teuthology.orchestra.run.vm00.stdout:Mon IP `192.168.123.100` is in CIDR network `192.168.123.1/32`
2026-03-10T14:45:28.326 INFO:teuthology.orchestra.run.vm00.stdout:Inferred mon public CIDR from local network configuration ['192.168.123.0/24', '192.168.123.0/24', '192.168.123.1/32', '192.168.123.1/32']
2026-03-10T14:45:28.326 INFO:teuthology.orchestra.run.vm00.stdout:Internal network (--cluster-network) has not been provided, OSD replication will default to the public_network
2026-03-10T14:45:28.326 INFO:teuthology.orchestra.run.vm00.stdout:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df...
2026-03-10T14:45:29.379 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/docker: stdout e911bdebe5c8faa3800735d1568fcdca65db60df: Pulling from ceph-ci/ceph
2026-03-10T14:45:29.379 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/docker: stdout Digest: sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc
2026-03-10T14:45:29.379 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/docker: stdout Status: Image is up to date for quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T14:45:29.379 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/docker: stdout quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T14:45:30.069 INFO:teuthology.orchestra.run.vm00.stdout:ceph: stdout ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)
2026-03-10T14:45:30.069 INFO:teuthology.orchestra.run.vm00.stdout:Ceph version: ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)
2026-03-10T14:45:30.069 INFO:teuthology.orchestra.run.vm00.stdout:Extracting ceph user uid/gid from container image...
2026-03-10T14:45:30.200 INFO:teuthology.orchestra.run.vm00.stdout:stat: stdout 167 167
2026-03-10T14:45:30.200 INFO:teuthology.orchestra.run.vm00.stdout:Creating initial keys...
2026-03-10T14:45:30.311 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-authtool: stdout AQCKLrBp3CrQEBAAP2oVs9xMDBC/BK/+E9OGmQ==
2026-03-10T14:45:30.431 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-authtool: stdout AQCKLrBpzJyNFxAACPn+uuypM1pmMTmcF5vGeQ==
2026-03-10T14:45:30.556 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-authtool: stdout AQCKLrBplf0NHxAADmGwUV0uhzFL/VoQKrXOfA==
2026-03-10T14:45:30.557 INFO:teuthology.orchestra.run.vm00.stdout:Creating initial monmap...
2026-03-10T14:45:30.768 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: monmap file /tmp/monmap
2026-03-10T14:45:30.768 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/monmaptool: stdout setting min_mon_release = quincy
2026-03-10T14:45:30.768 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: set fsid to 93bd26bc-1c8f-11f1-8404-610ce866bde7
2026-03-10T14:45:30.768 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
2026-03-10T14:45:30.768 INFO:teuthology.orchestra.run.vm00.stdout:monmaptool for a [v2:192.168.123.100:3300,v1:192.168.123.100:6789] on /usr/bin/monmaptool: monmap file /tmp/monmap
2026-03-10T14:45:30.768 INFO:teuthology.orchestra.run.vm00.stdout:setting min_mon_release = quincy
2026-03-10T14:45:30.768 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/monmaptool: set fsid to 93bd26bc-1c8f-11f1-8404-610ce866bde7
2026-03-10T14:45:30.768 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
2026-03-10T14:45:30.768 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T14:45:30.768 INFO:teuthology.orchestra.run.vm00.stdout:Creating mon...
2026-03-10T14:45:30.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.859+0000 7fda67267d80 0 set uid:gid to 167:167 (ceph:ceph)
2026-03-10T14:45:30.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.859+0000 7fda67267d80 1 imported monmap:
2026-03-10T14:45:30.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr epoch 0
2026-03-10T14:45:30.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7
2026-03-10T14:45:30.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr last_changed 2026-03-10T14:45:30.662441+0000
2026-03-10T14:45:30.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr created 2026-03-10T14:45:30.662441+0000
2026-03-10T14:45:30.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr min_mon_release 17 (quincy)
2026-03-10T14:45:30.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr election_strategy: 1
2026-03-10T14:45:30.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a
2026-03-10T14:45:30.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T14:45:30.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.859+0000 7fda67267d80 0 /usr/bin/ceph-mon: set fsid to 93bd26bc-1c8f-11f1-8404-610ce866bde7
2026-03-10T14:45:30.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.859+0000 7fda67267d80 4 rocksdb: RocksDB version: 7.9.2
2026-03-10T14:45:30.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T14:45:30.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.859+0000 7fda67267d80 4 rocksdb: Git sha 0
2026-03-10T14:45:30.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.859+0000 7fda67267d80 4 rocksdb: Compile date 2026-02-25 18:11:04
2026-03-10T14:45:30.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.859+0000 7fda67267d80 4 rocksdb: DB SUMMARY
2026-03-10T14:45:30.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T14:45:30.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.859+0000 7fda67267d80 4 rocksdb: DB Session ID: JI0RRB3H7NVCGSH0KRHV
2026-03-10T14:45:30.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T14:45:30.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.859+0000 7fda67267d80 4 rocksdb: SST files in /var/lib/ceph/mon/ceph-a/store.db dir, Total Num: 0, files:
2026-03-10T14:45:30.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T14:45:30.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.859+0000 7fda67267d80 4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-a/store.db:
2026-03-10T14:45:30.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T14:45:30.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.859+0000 7fda67267d80 4 rocksdb: Options.error_if_exists: 0
2026-03-10T14:45:30.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.859+0000 7fda67267d80 4 rocksdb: Options.create_if_missing: 1
2026-03-10T14:45:30.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.859+0000 7fda67267d80 4 rocksdb: Options.paranoid_checks: 1
2026-03-10T14:45:30.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.859+0000 7fda67267d80 4 rocksdb: Options.flush_verify_memtable_count: 1
2026-03-10T14:45:30.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.859+0000 7fda67267d80 4 rocksdb: Options.track_and_verify_wals_in_manifest: 0
2026-03-10T14:45:30.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.859+0000 7fda67267d80 4 rocksdb: Options.verify_sst_unique_id_in_manifest: 1
2026-03-10T14:45:30.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.859+0000 7fda67267d80 4 rocksdb: Options.env: 0x558628e57dc0
2026-03-10T14:45:30.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.859+0000 7fda67267d80 4 rocksdb: Options.fs: PosixFileSystem
2026-03-10T14:45:30.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.859+0000 7fda67267d80 4 rocksdb: Options.info_log: 0x55862ed6cda0
2026-03-10T14:45:30.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.859+0000 7fda67267d80 4 rocksdb: Options.max_file_opening_threads: 16
2026-03-10T14:45:30.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.859+0000 7fda67267d80 4 rocksdb: Options.statistics: (nil)
2026-03-10T14:45:30.917 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.859+0000 7fda67267d80 4 rocksdb: Options.use_fsync: 0
2026-03-10T14:45:30.918 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.859+0000 7fda67267d80 4 rocksdb: Options.max_log_file_size: 0
2026-03-10T14:45:30.918 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.859+0000 7fda67267d80 4 rocksdb: Options.max_manifest_file_size: 1073741824
2026-03-10T14:45:30.918 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.859+0000 7fda67267d80 4 rocksdb: Options.log_file_time_to_roll: 0
2026-03-10T14:45:30.918 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.859+0000 7fda67267d80 4 rocksdb: Options.keep_log_file_num: 1000
2026-03-10T14:45:30.918 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.859+0000 7fda67267d80 4 rocksdb: Options.recycle_log_file_num: 0
2026-03-10T14:45:30.918 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.859+0000 7fda67267d80 4 rocksdb: Options.allow_fallocate: 1
2026-03-10T14:45:30.918 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.859+0000 7fda67267d80 4 rocksdb: Options.allow_mmap_reads: 0
2026-03-10T14:45:30.918 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.859+0000 7fda67267d80 4 rocksdb: Options.allow_mmap_writes: 0
2026-03-10T14:45:30.918 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.859+0000 7fda67267d80 4 rocksdb: Options.use_direct_reads: 0
2026-03-10T14:45:30.918 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.859+0000 7fda67267d80 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0
2026-03-10T14:45:30.918 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.859+0000 7fda67267d80 4 rocksdb: Options.create_missing_column_families: 0
2026-03-10T14:45:30.918 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.859+0000 7fda67267d80 4 rocksdb: Options.db_log_dir:
2026-03-10T14:45:30.918 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.859+0000 7fda67267d80 4 rocksdb: Options.wal_dir:
2026-03-10T14:45:30.918 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.859+0000 7fda67267d80 4 rocksdb: Options.table_cache_numshardbits: 6
2026-03-10T14:45:30.918 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.859+0000 7fda67267d80 4 rocksdb: Options.WAL_ttl_seconds: 0
2026-03-10T14:45:30.918 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.859+0000 7fda67267d80 4 rocksdb: Options.WAL_size_limit_MB: 0
2026-03-10T14:45:30.918 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.859+0000 7fda67267d80 4 rocksdb: Options.max_write_batch_group_size_bytes: 1048576
2026-03-10T14:45:30.918 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.859+0000 7fda67267d80 4 rocksdb: Options.manifest_preallocation_size: 4194304
2026-03-10T14:45:30.918 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.859+0000 7fda67267d80 4 rocksdb: Options.is_fd_close_on_exec: 1
2026-03-10T14:45:30.918 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.859+0000 7fda67267d80 4 rocksdb: Options.advise_random_on_open: 1
2026-03-10T14:45:30.918 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.859+0000 7fda67267d80 4 rocksdb: Options.db_write_buffer_size: 0
2026-03-10T14:45:30.918 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.859+0000 7fda67267d80 4 rocksdb: Options.write_buffer_manager: 0x55862ed635e0
2026-03-10T14:45:30.918 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.859+0000 7fda67267d80 4 rocksdb: Options.access_hint_on_compaction_start: 1
2026-03-10T14:45:30.918 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.859+0000 7fda67267d80 4 rocksdb: Options.random_access_max_buffer_size: 1048576
2026-03-10T14:45:30.918 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.859+0000 7fda67267d80 4 rocksdb: Options.use_adaptive_mutex: 0
2026-03-10T14:45:30.918 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.859+0000 7fda67267d80 4 rocksdb: Options.rate_limiter: (nil)
2026-03-10T14:45:30.918 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.859+0000 7fda67267d80 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0
2026-03-10T14:45:30.918 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.859+0000 7fda67267d80 4 rocksdb: Options.wal_recovery_mode: 2
2026-03-10T14:45:30.918 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.859+0000 7fda67267d80 4 rocksdb: Options.enable_thread_tracking: 0
2026-03-10T14:45:30.918 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.859+0000 7fda67267d80 4 rocksdb: Options.enable_pipelined_write: 0
2026-03-10T14:45:30.918 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.859+0000 7fda67267d80 4 rocksdb: Options.unordered_write: 0
2026-03-10T14:45:30.918 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.859+0000 7fda67267d80 4 rocksdb: Options.allow_concurrent_memtable_write: 1
2026-03-10T14:45:30.918 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.859+0000 7fda67267d80 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1
2026-03-10T14:45:30.918 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.859+0000 7fda67267d80 4 rocksdb: Options.write_thread_max_yield_usec: 100
2026-03-10T14:45:30.918 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.859+0000 7fda67267d80 4 rocksdb: Options.write_thread_slow_yield_usec: 3
2026-03-10T14:45:30.918 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.859+0000 7fda67267d80 4 rocksdb: Options.row_cache: None
2026-03-10T14:45:30.918 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.859+0000 7fda67267d80 4 rocksdb: Options.wal_filter: None
2026-03-10T14:45:30.918 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.859+0000 7fda67267d80 4 rocksdb: Options.avoid_flush_during_recovery: 0
2026-03-10T14:45:30.918 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.859+0000 7fda67267d80 4 rocksdb: Options.allow_ingest_behind: 0
2026-03-10T14:45:30.918 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.859+0000 7fda67267d80 4 rocksdb: Options.two_write_queues: 0
2026-03-10T14:45:30.918 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.859+0000 7fda67267d80 4 rocksdb: Options.manual_wal_flush: 0
2026-03-10T14:45:30.918 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.859+0000 7fda67267d80 4 rocksdb: Options.wal_compression: 0
2026-03-10T14:45:30.918 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.859+0000 7fda67267d80 4 rocksdb: Options.atomic_flush: 0
2026-03-10T14:45:30.918 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.avoid_unnecessary_blocking_io: 0
2026-03-10T14:45:30.918 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.persist_stats_to_disk: 0
2026-03-10T14:45:30.918 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.write_dbid_to_manifest: 0
2026-03-10T14:45:30.918 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.log_readahead_size: 0
2026-03-10T14:45:30.918 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.file_checksum_gen_factory: Unknown
2026-03-10T14:45:30.918 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.best_efforts_recovery: 0
2026-03-10T14:45:30.918 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.max_bgerror_resume_count: 2147483647
2026-03-10T14:45:30.918 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.bgerror_resume_retry_interval: 1000000
2026-03-10T14:45:30.918 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.allow_data_in_errors: 0
2026-03-10T14:45:30.918 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.db_host_id: __hostname__
2026-03-10T14:45:30.918 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.enforce_single_del_contracts: true
2026-03-10T14:45:30.918 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.max_background_jobs: 2
2026-03-10T14:45:30.918 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.max_background_compactions: -1
2026-03-10T14:45:30.918 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.max_subcompactions: 1
2026-03-10T14:45:30.918 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.avoid_flush_during_shutdown: 0
2026-03-10T14:45:30.918 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.writable_file_max_buffer_size: 1048576
2026-03-10T14:45:30.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.delayed_write_rate : 16777216
2026-03-10T14:45:30.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.max_total_wal_size: 0
2026-03-10T14:45:30.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000
2026-03-10T14:45:30.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.stats_dump_period_sec: 600
2026-03-10T14:45:30.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.stats_persist_period_sec: 600
2026-03-10T14:45:30.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.stats_history_buffer_size: 1048576
2026-03-10T14:45:30.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.max_open_files: -1
2026-03-10T14:45:30.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.bytes_per_sync: 0
2026-03-10T14:45:30.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.wal_bytes_per_sync: 0
2026-03-10T14:45:30.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.strict_bytes_per_sync: 0
2026-03-10T14:45:30.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.compaction_readahead_size: 0
2026-03-10T14:45:30.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.max_background_flushes: -1
2026-03-10T14:45:30.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Compression algorithms supported:
2026-03-10T14:45:30.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: kZSTD supported: 0
2026-03-10T14:45:30.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: kXpressCompression supported: 0
2026-03-10T14:45:30.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: kBZip2Compression supported: 0
2026-03-10T14:45:30.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: kZSTDNotFinalCompression supported: 0
2026-03-10T14:45:30.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: kLZ4Compression supported: 1
2026-03-10T14:45:30.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: kZlibCompression supported: 1
2026-03-10T14:45:30.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: kLZ4HCCompression supported: 1
2026-03-10T14:45:30.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: kSnappyCompression supported: 1
2026-03-10T14:45:30.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Fast CRC32 supported: Supported on x86
2026-03-10T14:45:30.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: DMutex implementation: pthread_mutex_t
2026-03-10T14:45:30.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: [db/db_impl/db_impl_open.cc:317] Creating manifest 1
2026-03-10T14:45:30.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T14:45:30.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000001
2026-03-10T14:45:30.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T14:45:30.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
2026-03-10T14:45:30.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T14:45:30.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.comparator: leveldb.BytewiseComparator
2026-03-10T14:45:30.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.merge_operator:
2026-03-10T14:45:30.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.compaction_filter: None
2026-03-10T14:45:30.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.compaction_filter_factory: None
2026-03-10T14:45:30.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.sst_partitioner_factory: None
2026-03-10T14:45:30.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.memtable_factory: SkipListFactory
2026-03-10T14:45:30.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.table_factory: BlockBasedTable
2026-03-10T14:45:30.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55862ed5f520)
2026-03-10T14:45:30.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr cache_index_and_filter_blocks: 1
2026-03-10T14:45:30.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr cache_index_and_filter_blocks_with_high_priority: 0
2026-03-10T14:45:30.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr pin_l0_filter_and_index_blocks_in_cache: 0
2026-03-10T14:45:30.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr pin_top_level_index_and_filter: 1
2026-03-10T14:45:30.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr index_type: 0
2026-03-10T14:45:30.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr data_block_index_type: 0
2026-03-10T14:45:30.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr index_shortening: 1
2026-03-10T14:45:30.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr data_block_hash_table_util_ratio: 0.750000
2026-03-10T14:45:30.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr checksum: 4
2026-03-10T14:45:30.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr no_block_cache: 0
2026-03-10T14:45:30.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr block_cache: 0x55862ed85350
2026-03-10T14:45:30.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr block_cache_name: BinnedLRUCache
2026-03-10T14:45:30.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr block_cache_options:
2026-03-10T14:45:30.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr capacity : 536870912
2026-03-10T14:45:30.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr num_shard_bits : 4
2026-03-10T14:45:30.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr strict_capacity_limit : 0
2026-03-10T14:45:30.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr high_pri_pool_ratio: 0.000
2026-03-10T14:45:30.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr block_cache_compressed: (nil)
2026-03-10T14:45:30.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr persistent_cache: (nil)
2026-03-10T14:45:30.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr block_size: 4096
2026-03-10T14:45:30.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr block_size_deviation: 10
2026-03-10T14:45:30.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr block_restart_interval: 16
2026-03-10T14:45:30.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr index_block_restart_interval: 1
2026-03-10T14:45:30.919 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr metadata_block_size: 4096
2026-03-10T14:45:30.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr partition_filters: 0
2026-03-10T14:45:30.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr use_delta_encoding: 1
2026-03-10T14:45:30.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr filter_policy: bloomfilter
2026-03-10T14:45:30.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr whole_key_filtering: 1
2026-03-10T14:45:30.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr verify_compression: 0
2026-03-10T14:45:30.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr read_amp_bytes_per_bit: 0
2026-03-10T14:45:30.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr format_version: 5
2026-03-10T14:45:30.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr enable_index_compression: 1
2026-03-10T14:45:30.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr block_align: 0
2026-03-10T14:45:30.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr max_auto_readahead_size: 262144
2026-03-10T14:45:30.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr prepopulate_block_cache: 0
2026-03-10T14:45:30.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr initial_auto_readahead_size: 8192
2026-03-10T14:45:30.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr num_file_reads_for_auto_readahead: 2
2026-03-10T14:45:30.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T14:45:30.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.write_buffer_size: 33554432
2026-03-10T14:45:30.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.max_write_buffer_number: 2
2026-03-10T14:45:30.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.compression: NoCompression
2026-03-10T14:45:30.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.bottommost_compression: Disabled
2026-03-10T14:45:30.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.prefix_extractor: nullptr
2026-03-10T14:45:30.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
2026-03-10T14:45:30.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.num_levels: 7
2026-03-10T14:45:30.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.min_write_buffer_number_to_merge: 1
2026-03-10T14:45:30.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0
2026-03-10T14:45:30.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.max_write_buffer_size_to_maintain: 0
2026-03-10T14:45:30.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14
2026-03-10T14:45:30.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.bottommost_compression_opts.level: 32767
2026-03-10T14:45:30.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.bottommost_compression_opts.strategy: 0
2026-03-10T14:45:30.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
2026-03-10T14:45:30.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
2026-03-10T14:45:30.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
2026-03-10T14:45:30.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.bottommost_compression_opts.enabled: false
2026-03-10T14:45:30.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
2026-03-10T14:45:30.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
2026-03-10T14:45:30.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.compression_opts.window_bits: -14
2026-03-10T14:45:30.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.compression_opts.level: 32767
2026-03-10T14:45:30.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.compression_opts.strategy: 0
2026-03-10T14:45:30.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.compression_opts.max_dict_bytes: 0
2026-03-10T14:45:30.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
2026-03-10T14:45:30.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
2026-03-10T14:45:30.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.compression_opts.parallel_threads: 1
2026-03-10T14:45:30.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.compression_opts.enabled: false
2026-03-10T14:45:30.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
2026-03-10T14:45:30.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.level0_file_num_compaction_trigger: 4
2026-03-10T14:45:30.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.level0_slowdown_writes_trigger: 20
2026-03-10T14:45:30.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.level0_stop_writes_trigger: 36
2026-03-10T14:45:30.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.target_file_size_base: 67108864
2026-03-10T14:45:30.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.target_file_size_multiplier: 1
2026-03-10T14:45:30.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.max_bytes_for_level_base: 268435456
2026-03-10T14:45:30.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1
2026-03-10T14:45:30.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000
2026-03-10T14:45:30.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
2026-03-10T14:45:30.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
2026-03-10T14:45:30.921 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
2026-03-10T14:45:30.921 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
2026-03-10T14:45:30.921
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-10T14:45:30.921 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.863+0000 7fda67267d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 2026-03-10T14:45:30.921 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.867+0000 7fda67267d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-10T14:45:30.921 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.867+0000 7fda67267d80 4 rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-10T14:45:30.921 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.867+0000 7fda67267d80 4 rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-10T14:45:30.921 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.867+0000 7fda67267d80 4 rocksdb: Options.ignore_max_compaction_bytes_for_input: true 2026-03-10T14:45:30.921 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.867+0000 7fda67267d80 4 rocksdb: Options.arena_block_size: 1048576 2026-03-10T14:45:30.921 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.867+0000 7fda67267d80 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-10T14:45:30.921 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.867+0000 7fda67267d80 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-10T14:45:30.921 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.867+0000 7fda67267d80 4 rocksdb: Options.disable_auto_compactions: 0 2026-03-10T14:45:30.921 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.867+0000 7fda67267d80 4 rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-10T14:45:30.921 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.867+0000 7fda67267d80 4 rocksdb: Options.compaction_pri: kMinOverlappingRatio 2026-03-10T14:45:30.921 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.867+0000 7fda67267d80 4 rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-10T14:45:30.921 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.867+0000 7fda67267d80 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-10T14:45:30.921 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.867+0000 7fda67267d80 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-10T14:45:30.921 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.867+0000 7fda67267d80 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-10T14:45:30.921 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.867+0000 7fda67267d80 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1 2026-03-10T14:45:30.921 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.867+0000 7fda67267d80 4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-10T14:45:30.921 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.867+0000 7fda67267d80 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-10T14:45:30.921 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 
2026-03-10T14:45:30.867+0000 7fda67267d80 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-10T14:45:30.921 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.867+0000 7fda67267d80 4 rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); 2026-03-10T14:45:30.921 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.867+0000 7fda67267d80 4 rocksdb: Options.inplace_update_support: 0 2026-03-10T14:45:30.921 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.867+0000 7fda67267d80 4 rocksdb: Options.inplace_update_num_locks: 10000 2026-03-10T14:45:30.921 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.867+0000 7fda67267d80 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-10T14:45:30.921 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.867+0000 7fda67267d80 4 rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-10T14:45:30.921 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.867+0000 7fda67267d80 4 rocksdb: Options.memtable_huge_page_size: 0 2026-03-10T14:45:30.921 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.867+0000 7fda67267d80 4 rocksdb: Options.bloom_locality: 0 2026-03-10T14:45:30.921 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.867+0000 7fda67267d80 4 rocksdb: Options.max_successive_merges: 0 2026-03-10T14:45:30.921 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.867+0000 7fda67267d80 4 rocksdb: Options.optimize_filters_for_hits: 0 2026-03-10T14:45:30.921 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 
2026-03-10T14:45:30.867+0000 7fda67267d80 4 rocksdb: Options.paranoid_file_checks: 0 2026-03-10T14:45:30.921 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.867+0000 7fda67267d80 4 rocksdb: Options.force_consistency_checks: 1 2026-03-10T14:45:30.921 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.867+0000 7fda67267d80 4 rocksdb: Options.report_bg_io_stats: 0 2026-03-10T14:45:30.921 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.867+0000 7fda67267d80 4 rocksdb: Options.ttl: 2592000 2026-03-10T14:45:30.921 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.867+0000 7fda67267d80 4 rocksdb: Options.periodic_compaction_seconds: 0 2026-03-10T14:45:30.921 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.867+0000 7fda67267d80 4 rocksdb: Options.preclude_last_level_data_seconds: 0 2026-03-10T14:45:30.921 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.867+0000 7fda67267d80 4 rocksdb: Options.preserve_internal_time_seconds: 0 2026-03-10T14:45:30.921 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.867+0000 7fda67267d80 4 rocksdb: Options.enable_blob_files: false 2026-03-10T14:45:30.921 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.867+0000 7fda67267d80 4 rocksdb: Options.min_blob_size: 0 2026-03-10T14:45:30.921 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.867+0000 7fda67267d80 4 rocksdb: Options.blob_file_size: 268435456 2026-03-10T14:45:30.921 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.867+0000 7fda67267d80 4 rocksdb: Options.blob_compression_type: NoCompression 2026-03-10T14:45:30.921 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.867+0000 7fda67267d80 4 rocksdb: Options.enable_blob_garbage_collection: false 2026-03-10T14:45:30.921 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.867+0000 7fda67267d80 4 rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-10T14:45:30.921 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.867+0000 7fda67267d80 4 rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 2026-03-10T14:45:30.922 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.867+0000 7fda67267d80 4 rocksdb: Options.blob_compaction_readahead_size: 0 2026-03-10T14:45:30.922 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.867+0000 7fda67267d80 4 rocksdb: Options.blob_file_starting_level: 0 2026-03-10T14:45:30.922 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.867+0000 7fda67267d80 4 rocksdb: Options.experimental_mempurge_threshold: 0.000000 2026-03-10T14:45:30.922 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.867+0000 7fda67267d80 4 rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000001 succeeded,manifest_file_number is 1, next_file_number is 3, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0 2026-03-10T14:45:30.922 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr 2026-03-10T14:45:30.922 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.867+0000 7fda67267d80 4 rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0 2026-03-10T14:45:30.922 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr 
2026-03-10T14:45:30.922 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.867+0000 7fda67267d80 4 rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: e68fb4df-8115-4ebb-8eb4-f38830c686ec 2026-03-10T14:45:30.922 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr 2026-03-10T14:45:30.922 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.867+0000 7fda67267d80 4 rocksdb: [db/version_set.cc:5047] Creating manifest 5 2026-03-10T14:45:30.922 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr 2026-03-10T14:45:30.922 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.871+0000 7fda67267d80 4 rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55862ed86e00 2026-03-10T14:45:30.922 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.871+0000 7fda67267d80 4 rocksdb: DB pointer 0x55862ee6a000 2026-03-10T14:45:30.922 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.871+0000 7fda5e9f1640 4 rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- 2026-03-10T14:45:30.922 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.871+0000 7fda5e9f1640 4 rocksdb: [db/db_impl/db_impl.cc:1111] 2026-03-10T14:45:30.922 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr ** DB Stats ** 2026-03-10T14:45:30.922 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Uptime(secs): 0.0 total, 0.0 interval 2026-03-10T14:45:30.922 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s 2026-03-10T14:45:30.922 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 
0.00 GB, 0.00 MB/s 2026-03-10T14:45:30.922 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-10T14:45:30.922 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s 2026-03-10T14:45:30.922 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-10T14:45:30.922 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Interval stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-10T14:45:30.922 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr 2026-03-10T14:45:30.922 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr ** Compaction Stats [default] ** 2026-03-10T14:45:30.922 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-10T14:45:30.922 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 2026-03-10T14:45:30.922 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0 2026-03-10T14:45:30.922 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0 2026-03-10T14:45:30.922 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr 2026-03-10T14:45:30.922 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr ** 
Compaction Stats [default] ** 2026-03-10T14:45:30.922 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-10T14:45:30.922 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2026-03-10T14:45:30.922 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr 2026-03-10T14:45:30.922 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0 2026-03-10T14:45:30.922 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr 2026-03-10T14:45:30.922 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Uptime(secs): 0.0 total, 0.0 interval 2026-03-10T14:45:30.922 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Flush(GB): cumulative 0.000, interval 0.000 2026-03-10T14:45:30.922 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr AddFile(GB): cumulative 0.000, interval 0.000 2026-03-10T14:45:30.922 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr AddFile(Total Files): cumulative 0, interval 0 2026-03-10T14:45:30.922 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr AddFile(L0 Files): cumulative 0, interval 0 2026-03-10T14:45:30.923 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr AddFile(Keys): cumulative 0, interval 0 2026-03-10T14:45:30.923 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-10T14:45:30.923 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-10T14:45:30.923 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count 2026-03-10T14:45:30.923 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Block cache BinnedLRUCache@0x55862ed85350#7 capacity: 512.00 MB usage: 0.00 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 8e-06 secs_since: 0 2026-03-10T14:45:30.923 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Block cache entry stats(count,size,portion): Misc(1,0.00 KB,0%) 2026-03-10T14:45:30.923 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr 2026-03-10T14:45:30.923 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr ** File Read Latency Histogram By Level [default] ** 2026-03-10T14:45:30.923 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr 2026-03-10T14:45:30.923 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.871+0000 7fda67267d80 4 rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work 2026-03-10T14:45:30.923 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.871+0000 7fda67267d80 4 rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete 2026-03-10T14:45:30.923 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T14:45:30.871+0000 7fda67267d80 0 /usr/bin/ceph-mon: created monfs at /var/lib/ceph/mon/ceph-a for mon.a 2026-03-10T14:45:30.923 INFO:teuthology.orchestra.run.vm00.stdout:create mon.a on 
2026-03-10T14:45:31.112 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Removed /etc/systemd/system/multi-user.target.wants/ceph.target. 2026-03-10T14:45:31.308 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /etc/systemd/system/ceph.target. 2026-03-10T14:45:31.479 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7.target → /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7.target. 2026-03-10T14:45:31.479 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph.target.wants/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7.target → /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7.target. 2026-03-10T14:45:31.660 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@mon.a 2026-03-10T14:45:31.660 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Failed to reset failed state of unit ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@mon.a.service: Unit ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@mon.a.service not loaded. 2026-03-10T14:45:31.830 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7.target.wants/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@mon.a.service → /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service. 2026-03-10T14:45:31.841 INFO:teuthology.orchestra.run.vm00.stdout:firewalld does not appear to be present 2026-03-10T14:45:31.841 INFO:teuthology.orchestra.run.vm00.stdout:Not possible to enable service . firewalld.service is not available 2026-03-10T14:45:31.841 INFO:teuthology.orchestra.run.vm00.stdout:Waiting for mon to start... 
2026-03-10T14:45:31.841 INFO:teuthology.orchestra.run.vm00.stdout:Waiting for mon... 2026-03-10T14:45:31.869 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:31 vm00 systemd[1]: Started Ceph mon.a for 93bd26bc-1c8f-11f1-8404-610ce866bde7. 2026-03-10T14:45:32.386 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout cluster: 2026-03-10T14:45:32.386 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout id: 93bd26bc-1c8f-11f1-8404-610ce866bde7 2026-03-10T14:45:32.386 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout health: HEALTH_OK 2026-03-10T14:45:32.386 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 2026-03-10T14:45:32.386 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout services: 2026-03-10T14:45:32.386 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon: 1 daemons, quorum a (age 0.0685378s) 2026-03-10T14:45:32.386 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mgr: no daemons active 2026-03-10T14:45:32.386 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout osd: 0 osds: 0 up, 0 in 2026-03-10T14:45:32.386 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 2026-03-10T14:45:32.386 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout data: 2026-03-10T14:45:32.386 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout pools: 0 pools, 0 pgs 2026-03-10T14:45:32.386 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout objects: 0 objects, 0 B 2026-03-10T14:45:32.386 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout usage: 0 B used, 0 B / 0 B avail 2026-03-10T14:45:32.386 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout pgs: 2026-03-10T14:45:32.386 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 2026-03-10T14:45:32.386 INFO:teuthology.orchestra.run.vm00.stdout:mon is available 2026-03-10T14:45:32.386 INFO:teuthology.orchestra.run.vm00.stdout:Assimilating anything we can 
from ceph.conf... 2026-03-10T14:45:32.443 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:32 vm00 bash[20252]: cluster 2026-03-10T14:45:32.261749+0000 mon.a (mon.0) 0 : cluster [INF] mkfs 93bd26bc-1c8f-11f1-8404-610ce866bde7 2026-03-10T14:45:32.443 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:32 vm00 bash[20252]: cluster 2026-03-10T14:45:32.261749+0000 mon.a (mon.0) 0 : cluster [INF] mkfs 93bd26bc-1c8f-11f1-8404-610ce866bde7 2026-03-10T14:45:32.443 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:32 vm00 bash[20252]: cluster 2026-03-10T14:45:32.255854+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-10T14:45:32.443 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:32 vm00 bash[20252]: cluster 2026-03-10T14:45:32.255854+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-10T14:45:32.847 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 2026-03-10T14:45:32.848 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout [global] 2026-03-10T14:45:32.848 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout fsid = 93bd26bc-1c8f-11f1-8404-610ce866bde7 2026-03-10T14:45:32.848 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon_cluster_log_file_level = debug 2026-03-10T14:45:32.848 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon_host = [v2:192.168.123.100:3300,v1:192.168.123.100:6789] 2026-03-10T14:45:32.848 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon_osd_allow_pg_remap = true 2026-03-10T14:45:32.848 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon_osd_allow_primary_affinity = true 2026-03-10T14:45:32.848 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon_warn_on_no_sortbitwise = false 2026-03-10T14:45:32.848 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout osd_crush_chooseleaf_type = 0 2026-03-10T14:45:32.848 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 2026-03-10T14:45:32.848 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout [mgr] 2026-03-10T14:45:32.848 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mgr/telemetry/nag = false 2026-03-10T14:45:32.848 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 2026-03-10T14:45:32.848 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout [osd] 2026-03-10T14:45:32.848 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout osd_map_max_advance = 10 2026-03-10T14:45:32.848 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout osd_sloppy_crc = true 2026-03-10T14:45:32.848 INFO:teuthology.orchestra.run.vm00.stdout:Generating new minimal ceph.conf... 2026-03-10T14:45:33.093 INFO:teuthology.orchestra.run.vm00.stdout:Restarting the monitor... 2026-03-10T14:45:33.135 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 systemd[1]: Stopping Ceph mon.a for 93bd26bc-1c8f-11f1-8404-610ce866bde7... 
2026-03-10T14:45:33.214 INFO:teuthology.orchestra.run.vm00.stdout:Setting public_network to 192.168.123.0/24,192.168.123.1/32 in mon config section 2026-03-10T14:45:33.414 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20252]: debug 2026-03-10T14:45:33.127+0000 7fa6c532d640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.a -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0 2026-03-10T14:45:33.414 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20252]: debug 2026-03-10T14:45:33.127+0000 7fa6c532d640 -1 mon.a@0(leader) e1 *** Got Signal Terminated *** 2026-03-10T14:45:33.414 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20641]: ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7-mon-a 2026-03-10T14:45:33.414 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 systemd[1]: ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@mon.a.service: Deactivated successfully. 2026-03-10T14:45:33.414 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 systemd[1]: Stopped Ceph mon.a for 93bd26bc-1c8f-11f1-8404-610ce866bde7. 2026-03-10T14:45:33.414 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 systemd[1]: Started Ceph mon.a for 93bd26bc-1c8f-11f1-8404-610ce866bde7. 
2026-03-10T14:45:33.414 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.331+0000 7f1f6b44fd80 0 set uid:gid to 167:167 (ceph:ceph) 2026-03-10T14:45:33.414 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.331+0000 7f1f6b44fd80 0 ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable), process ceph-mon, pid 8 2026-03-10T14:45:33.414 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.331+0000 7f1f6b44fd80 0 pidfile_write: ignore empty --pid-file 2026-03-10T14:45:33.414 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 0 load: jerasure load: lrc 2026-03-10T14:45:33.414 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: RocksDB version: 7.9.2 2026-03-10T14:45:33.414 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Git sha 0 2026-03-10T14:45:33.414 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Compile date 2026-02-25 18:11:04 2026-03-10T14:45:33.414 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: DB SUMMARY 2026-03-10T14:45:33.414 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: DB Session ID: YKLVMDWO84EGGKOEFHTU 2026-03-10T14:45:33.414 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: CURRENT file: CURRENT 2026-03-10T14:45:33.414 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 
2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: IDENTITY file: IDENTITY 2026-03-10T14:45:33.414 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: MANIFEST file: MANIFEST-000010 size: 179 Bytes 2026-03-10T14:45:33.414 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: SST files in /var/lib/ceph/mon/ceph-a/store.db dir, Total Num: 1, files: 000008.sst 2026-03-10T14:45:33.414 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-a/store.db: 000009.log size: 76789 ; 2026-03-10T14:45:33.414 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.error_if_exists: 0 2026-03-10T14:45:33.414 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.create_if_missing: 0 2026-03-10T14:45:33.414 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.paranoid_checks: 1 2026-03-10T14:45:33.414 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.flush_verify_memtable_count: 1 2026-03-10T14:45:33.414 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.track_and_verify_wals_in_manifest: 0 2026-03-10T14:45:33.414 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.verify_sst_unique_id_in_manifest: 1 2026-03-10T14:45:33.414 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 
bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.env: 0x5573726e3dc0 2026-03-10T14:45:33.414 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.fs: PosixFileSystem 2026-03-10T14:45:33.414 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.info_log: 0x5573990d8d00 2026-03-10T14:45:33.414 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.max_file_opening_threads: 16 2026-03-10T14:45:33.414 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.statistics: (nil) 2026-03-10T14:45:33.414 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.use_fsync: 0 2026-03-10T14:45:33.414 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.max_log_file_size: 0 2026-03-10T14:45:33.414 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.max_manifest_file_size: 1073741824 2026-03-10T14:45:33.414 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.log_file_time_to_roll: 0 2026-03-10T14:45:33.414 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.keep_log_file_num: 1000 2026-03-10T14:45:33.414 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.recycle_log_file_num: 0 2026-03-10T14:45:33.414 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.allow_fallocate: 1 2026-03-10T14:45:33.414 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.allow_mmap_reads: 0 2026-03-10T14:45:33.414 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.allow_mmap_writes: 0 2026-03-10T14:45:33.414 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.use_direct_reads: 0 2026-03-10T14:45:33.414 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 2026-03-10T14:45:33.414 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.create_missing_column_families: 0 2026-03-10T14:45:33.414 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.db_log_dir: 2026-03-10T14:45:33.414 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.wal_dir: 2026-03-10T14:45:33.414 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.table_cache_numshardbits: 6 2026-03-10T14:45:33.415 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.WAL_ttl_seconds: 0 2026-03-10T14:45:33.415 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: 
Options.WAL_size_limit_MB: 0 2026-03-10T14:45:33.415 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.max_write_batch_group_size_bytes: 1048576 2026-03-10T14:45:33.415 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.manifest_preallocation_size: 4194304 2026-03-10T14:45:33.415 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.is_fd_close_on_exec: 1 2026-03-10T14:45:33.415 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.advise_random_on_open: 1 2026-03-10T14:45:33.415 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.db_write_buffer_size: 0 2026-03-10T14:45:33.415 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.write_buffer_manager: 0x5573990dd900 2026-03-10T14:45:33.415 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.access_hint_on_compaction_start: 1 2026-03-10T14:45:33.415 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.random_access_max_buffer_size: 1048576 2026-03-10T14:45:33.415 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.use_adaptive_mutex: 0 2026-03-10T14:45:33.415 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.rate_limiter: (nil) 2026-03-10T14:45:33.415 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 2026-03-10T14:45:33.415 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.wal_recovery_mode: 2 2026-03-10T14:45:33.415 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.enable_thread_tracking: 0 2026-03-10T14:45:33.415 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.enable_pipelined_write: 0 2026-03-10T14:45:33.415 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.unordered_write: 0 2026-03-10T14:45:33.415 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.allow_concurrent_memtable_write: 1 2026-03-10T14:45:33.415 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1 2026-03-10T14:45:33.415 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.write_thread_max_yield_usec: 100 2026-03-10T14:45:33.415 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.write_thread_slow_yield_usec: 3 2026-03-10T14:45:33.415 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.row_cache: None 2026-03-10T14:45:33.415 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: 
debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.wal_filter: None 2026-03-10T14:45:33.415 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.avoid_flush_during_recovery: 0 2026-03-10T14:45:33.415 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.allow_ingest_behind: 0 2026-03-10T14:45:33.415 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.two_write_queues: 0 2026-03-10T14:45:33.415 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.manual_wal_flush: 0 2026-03-10T14:45:33.415 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.wal_compression: 0 2026-03-10T14:45:33.415 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.atomic_flush: 0 2026-03-10T14:45:33.415 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.avoid_unnecessary_blocking_io: 0 2026-03-10T14:45:33.415 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.persist_stats_to_disk: 0 2026-03-10T14:45:33.415 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.write_dbid_to_manifest: 0 2026-03-10T14:45:33.415 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.log_readahead_size: 0 2026-03-10T14:45:33.415 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.file_checksum_gen_factory: Unknown 2026-03-10T14:45:33.415 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.best_efforts_recovery: 0 2026-03-10T14:45:33.415 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.max_bgerror_resume_count: 2147483647 2026-03-10T14:45:33.415 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.bgerror_resume_retry_interval: 1000000 2026-03-10T14:45:33.415 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.allow_data_in_errors: 0 2026-03-10T14:45:33.415 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.db_host_id: __hostname__ 2026-03-10T14:45:33.415 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.enforce_single_del_contracts: true 2026-03-10T14:45:33.415 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.max_background_jobs: 2 2026-03-10T14:45:33.415 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.max_background_compactions: -1 2026-03-10T14:45:33.415 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.max_subcompactions: 1 2026-03-10T14:45:33.415 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 
bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.avoid_flush_during_shutdown: 0 2026-03-10T14:45:33.415 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.writable_file_max_buffer_size: 1048576 2026-03-10T14:45:33.415 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.delayed_write_rate : 16777216 2026-03-10T14:45:33.415 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.max_total_wal_size: 0 2026-03-10T14:45:33.415 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 2026-03-10T14:45:33.415 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.stats_dump_period_sec: 600 2026-03-10T14:45:33.415 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.stats_persist_period_sec: 600 2026-03-10T14:45:33.415 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.stats_history_buffer_size: 1048576 2026-03-10T14:45:33.415 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.max_open_files: -1 2026-03-10T14:45:33.415 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.bytes_per_sync: 0 2026-03-10T14:45:33.415 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 
7f1f6b44fd80 4 rocksdb: Options.wal_bytes_per_sync: 0 2026-03-10T14:45:33.415 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.strict_bytes_per_sync: 0 2026-03-10T14:45:33.415 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.compaction_readahead_size: 0 2026-03-10T14:45:33.415 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.max_background_flushes: -1 2026-03-10T14:45:33.416 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Compression algorithms supported: 2026-03-10T14:45:33.416 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: kZSTD supported: 0 2026-03-10T14:45:33.416 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: kXpressCompression supported: 0 2026-03-10T14:45:33.416 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: kBZip2Compression supported: 0 2026-03-10T14:45:33.416 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: kZSTDNotFinalCompression supported: 0 2026-03-10T14:45:33.416 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: kLZ4Compression supported: 1 2026-03-10T14:45:33.416 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: kZlibCompression supported: 1 2026-03-10T14:45:33.416 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 
10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: kLZ4HCCompression supported: 1 2026-03-10T14:45:33.416 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: kSnappyCompression supported: 1 2026-03-10T14:45:33.416 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Fast CRC32 supported: Supported on x86 2026-03-10T14:45:33.416 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: DMutex implementation: pthread_mutex_t 2026-03-10T14:45:33.416 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000010 2026-03-10T14:45:33.416 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: 2026-03-10T14:45:33.416 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.comparator: leveldb.BytewiseComparator 2026-03-10T14:45:33.416 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.merge_operator: 2026-03-10T14:45:33.416 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.compaction_filter: None 2026-03-10T14:45:33.416 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.compaction_filter_factory: None 2026-03-10T14:45:33.416 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.sst_partitioner_factory: None 2026-03-10T14:45:33.416 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.memtable_factory: SkipListFactory 2026-03-10T14:45:33.416 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.table_factory: BlockBasedTable 2026-03-10T14:45:33.416 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5573990d8480) 2026-03-10T14:45:33.416 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: cache_index_and_filter_blocks: 1 2026-03-10T14:45:33.416 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: cache_index_and_filter_blocks_with_high_priority: 0 2026-03-10T14:45:33.416 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: pin_l0_filter_and_index_blocks_in_cache: 0 2026-03-10T14:45:33.416 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: pin_top_level_index_and_filter: 1 2026-03-10T14:45:33.416 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: index_type: 0 2026-03-10T14:45:33.416 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: data_block_index_type: 0 2026-03-10T14:45:33.416 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: index_shortening: 1 2026-03-10T14:45:33.416 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: data_block_hash_table_util_ratio: 0.750000 2026-03-10T14:45:33.416 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: checksum: 4 2026-03-10T14:45:33.416 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: no_block_cache: 0 2026-03-10T14:45:33.416 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: block_cache: 0x5573990ff350 2026-03-10T14:45:33.416 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: block_cache_name: BinnedLRUCache 2026-03-10T14:45:33.416 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: block_cache_options: 2026-03-10T14:45:33.416 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: capacity : 536870912 2026-03-10T14:45:33.416 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: num_shard_bits : 4 2026-03-10T14:45:33.416 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: strict_capacity_limit : 0 2026-03-10T14:45:33.416 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: high_pri_pool_ratio: 0.000 2026-03-10T14:45:33.416 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: block_cache_compressed: (nil) 2026-03-10T14:45:33.416 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: persistent_cache: (nil) 2026-03-10T14:45:33.416 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: block_size: 4096 2026-03-10T14:45:33.416 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: block_size_deviation: 10 2026-03-10T14:45:33.416 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: block_restart_interval: 16 2026-03-10T14:45:33.416 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: index_block_restart_interval: 1 2026-03-10T14:45:33.416 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: metadata_block_size: 4096 2026-03-10T14:45:33.416 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: partition_filters: 0 2026-03-10T14:45:33.416 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 
14:45:33 vm00 bash[20726]: use_delta_encoding: 1 2026-03-10T14:45:33.416 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: filter_policy: bloomfilter 2026-03-10T14:45:33.416 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: whole_key_filtering: 1 2026-03-10T14:45:33.416 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: verify_compression: 0 2026-03-10T14:45:33.416 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: read_amp_bytes_per_bit: 0 2026-03-10T14:45:33.416 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: format_version: 5 2026-03-10T14:45:33.416 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: enable_index_compression: 1 2026-03-10T14:45:33.416 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: block_align: 0 2026-03-10T14:45:33.416 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: max_auto_readahead_size: 262144 2026-03-10T14:45:33.416 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: prepopulate_block_cache: 0 2026-03-10T14:45:33.416 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: initial_auto_readahead_size: 8192 2026-03-10T14:45:33.416 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: num_file_reads_for_auto_readahead: 2 2026-03-10T14:45:33.417 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.write_buffer_size: 33554432 2026-03-10T14:45:33.417 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.max_write_buffer_number: 2 2026-03-10T14:45:33.417 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.compression: NoCompression 
2026-03-10T14:45:33.417 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.bottommost_compression: Disabled 2026-03-10T14:45:33.417 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.prefix_extractor: nullptr 2026-03-10T14:45:33.417 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr 2026-03-10T14:45:33.417 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.num_levels: 7 2026-03-10T14:45:33.417 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.min_write_buffer_number_to_merge: 1 2026-03-10T14:45:33.417 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0 2026-03-10T14:45:33.417 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.max_write_buffer_size_to_maintain: 0 2026-03-10T14:45:33.417 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14 2026-03-10T14:45:33.417 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.bottommost_compression_opts.level: 32767 2026-03-10T14:45:33.417 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: 
Options.bottommost_compression_opts.strategy: 0 2026-03-10T14:45:33.417 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 2026-03-10T14:45:33.417 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 2026-03-10T14:45:33.417 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 2026-03-10T14:45:33.417 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.bottommost_compression_opts.enabled: false 2026-03-10T14:45:33.417 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 2026-03-10T14:45:33.417 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true 2026-03-10T14:45:33.417 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.compression_opts.window_bits: -14 2026-03-10T14:45:33.417 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.compression_opts.level: 32767 2026-03-10T14:45:33.417 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.compression_opts.strategy: 0 2026-03-10T14:45:33.417 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 
bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.compression_opts.max_dict_bytes: 0 2026-03-10T14:45:33.417 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 2026-03-10T14:45:33.417 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.compression_opts.use_zstd_dict_trainer: true 2026-03-10T14:45:33.417 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.compression_opts.parallel_threads: 1 2026-03-10T14:45:33.417 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.compression_opts.enabled: false 2026-03-10T14:45:33.417 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 2026-03-10T14:45:33.417 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-10T14:45:33.417 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.level0_slowdown_writes_trigger: 20 2026-03-10T14:45:33.417 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-10T14:45:33.417 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.target_file_size_base: 67108864 2026-03-10T14:45:33.417 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 
14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.target_file_size_multiplier: 1 2026-03-10T14:45:33.417 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-10T14:45:33.417 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-10T14:45:33.417 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-10T14:45:33.417 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-10T14:45:33.417 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-10T14:45:33.417 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-10T14:45:33.417 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 2026-03-10T14:45:33.417 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-10T14:45:33.417 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 
2026-03-10T14:45:33.417 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-10T14:45:33.417 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-10T14:45:33.417 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-10T14:45:33.417 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.ignore_max_compaction_bytes_for_input: true 2026-03-10T14:45:33.417 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.arena_block_size: 1048576 2026-03-10T14:45:33.417 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-10T14:45:33.417 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-10T14:45:33.417 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.disable_auto_compactions: 0 2026-03-10T14:45:33.417 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-10T14:45:33.417 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: 
Options.compaction_pri: kMinOverlappingRatio 2026-03-10T14:45:33.417 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-10T14:45:33.417 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-10T14:45:33.417 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-10T14:45:33.417 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-10T14:45:33.417 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1 2026-03-10T14:45:33.417 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-10T14:45:33.417 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-10T14:45:33.417 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-10T14:45:33.418 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.table_properties_collectors: 
CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); 2026-03-10T14:45:33.418 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.inplace_update_support: 0 2026-03-10T14:45:33.418 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.inplace_update_num_locks: 10000 2026-03-10T14:45:33.418 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-10T14:45:33.418 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-10T14:45:33.418 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.memtable_huge_page_size: 0 2026-03-10T14:45:33.418 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.bloom_locality: 0 2026-03-10T14:45:33.418 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.max_successive_merges: 0 2026-03-10T14:45:33.418 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.optimize_filters_for_hits: 0 2026-03-10T14:45:33.418 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.paranoid_file_checks: 0 2026-03-10T14:45:33.418 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: 
Options.force_consistency_checks: 1 2026-03-10T14:45:33.418 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.report_bg_io_stats: 0 2026-03-10T14:45:33.418 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.ttl: 2592000 2026-03-10T14:45:33.418 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.periodic_compaction_seconds: 0 2026-03-10T14:45:33.418 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.preclude_last_level_data_seconds: 0 2026-03-10T14:45:33.418 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.preserve_internal_time_seconds: 0 2026-03-10T14:45:33.418 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.enable_blob_files: false 2026-03-10T14:45:33.418 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.min_blob_size: 0 2026-03-10T14:45:33.418 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.blob_file_size: 268435456 2026-03-10T14:45:33.418 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.blob_compression_type: NoCompression 2026-03-10T14:45:33.418 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.enable_blob_garbage_collection: false 2026-03-10T14:45:33.418 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-10T14:45:33.418 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 2026-03-10T14:45:33.418 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.blob_compaction_readahead_size: 0 2026-03-10T14:45:33.418 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.blob_file_starting_level: 0 2026-03-10T14:45:33.418 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.335+0000 7f1f6b44fd80 4 rocksdb: Options.experimental_mempurge_threshold: 0.000000 2026-03-10T14:45:33.418 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.339+0000 7f1f6b44fd80 4 rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5 2026-03-10T14:45:33.418 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.339+0000 7f1f6b44fd80 4 rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5 2026-03-10T14:45:33.418 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.339+0000 7f1f6b44fd80 4 rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: e68fb4df-8115-4ebb-8eb4-f38830c686ec 2026-03-10T14:45:33.418 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 
2026-03-10T14:45:33.339+0000 7f1f6b44fd80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773153933342916, "job": 1, "event": "recovery_started", "wal_files": [9]} 2026-03-10T14:45:33.418 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.339+0000 7f1f6b44fd80 4 rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2 2026-03-10T14:45:33.418 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.339+0000 7f1f6b44fd80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773153933345042, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 73643, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 231, "table_properties": {"data_size": 71922, "index_size": 174, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 517, "raw_key_size": 10026, "raw_average_key_size": 49, "raw_value_size": 66337, "raw_average_value_size": 328, "num_data_blocks": 8, "num_entries": 202, "num_filter_entries": 202, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773153933, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e68fb4df-8115-4ebb-8eb4-f38830c686ec", "db_session_id": "YKLVMDWO84EGGKOEFHTU", "orig_file_number": 13, 
"seqno_to_time_mapping": "N/A"}} 2026-03-10T14:45:33.418 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.339+0000 7f1f6b44fd80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773153933345112, "job": 1, "event": "recovery_finished"} 2026-03-10T14:45:33.418 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.339+0000 7f1f6b44fd80 4 rocksdb: [db/version_set.cc:5047] Creating manifest 15 2026-03-10T14:45:33.418 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.343+0000 7f1f6b44fd80 4 rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-a/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 2026-03-10T14:45:33.418 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.343+0000 7f1f6b44fd80 4 rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x557399100e00 2026-03-10T14:45:33.418 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.343+0000 7f1f6b44fd80 4 rocksdb: DB pointer 0x55739920c000 2026-03-10T14:45:33.418 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.343+0000 7f1f6b44fd80 0 starting mon.a rank 0 at public addrs [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] at bind addrs [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon_data /var/lib/ceph/mon/ceph-a fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 2026-03-10T14:45:33.418 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.347+0000 7f1f6b44fd80 1 mon.a@-1(???) 
e1 preinit fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 2026-03-10T14:45:33.418 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.347+0000 7f1f6b44fd80 0 mon.a@-1(???).mds e1 new map 2026-03-10T14:45:33.418 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.347+0000 7f1f6b44fd80 0 mon.a@-1(???).mds e1 print_map 2026-03-10T14:45:33.418 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: e1 2026-03-10T14:45:33.418 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: btime 2026-03-10T14:45:32:261204+0000 2026-03-10T14:45:33.418 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: enable_multiple, ever_enabled_multiple: 1,1 2026-03-10T14:45:33.418 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes} 2026-03-10T14:45:33.418 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: legacy client fscid: -1 2026-03-10T14:45:33.418 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: 2026-03-10T14:45:33.418 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: No filesystems configured 2026-03-10T14:45:33.418 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.347+0000 7f1f6b44fd80 0 mon.a@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires 2026-03-10T14:45:33.418 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.347+0000 7f1f6b44fd80 0 mon.a@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 
2026-03-10T14:45:33.418 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.347+0000 7f1f6b44fd80 0 mon.a@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-10T14:45:33.418 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.347+0000 7f1f6b44fd80 0 mon.a@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-10T14:45:33.418 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: debug 2026-03-10T14:45:33.347+0000 7f1f6b44fd80 1 mon.a@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3 2026-03-10T14:45:33.490 INFO:teuthology.orchestra.run.vm00.stdout:Wrote config to /etc/ceph/ceph.conf 2026-03-10T14:45:33.491 INFO:teuthology.orchestra.run.vm00.stdout:Wrote keyring to /etc/ceph/ceph.client.admin.keyring 2026-03-10T14:45:33.491 INFO:teuthology.orchestra.run.vm00.stdout:Creating mgr... 2026-03-10T14:45:33.491 INFO:teuthology.orchestra.run.vm00.stdout:Verifying port 0.0.0.0:9283 ... 2026-03-10T14:45:33.491 INFO:teuthology.orchestra.run.vm00.stdout:Verifying port 0.0.0.0:8765 ... 2026-03-10T14:45:33.689 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:33 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-10T14:45:33.689 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: cluster 2026-03-10T14:45:33.355653+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-10T14:45:33.689 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: cluster 2026-03-10T14:45:33.355653+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-10T14:45:33.689 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: cluster 2026-03-10T14:45:33.355697+0000 mon.a (mon.0) 2 : cluster [DBG] monmap epoch 1 2026-03-10T14:45:33.689 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: cluster 2026-03-10T14:45:33.355697+0000 mon.a (mon.0) 2 : cluster [DBG] monmap epoch 1 2026-03-10T14:45:33.689 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: cluster 2026-03-10T14:45:33.355701+0000 mon.a (mon.0) 3 : cluster [DBG] fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 2026-03-10T14:45:33.689 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: cluster 2026-03-10T14:45:33.355701+0000 mon.a (mon.0) 3 : cluster [DBG] fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 2026-03-10T14:45:33.689 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: cluster 2026-03-10T14:45:33.355704+0000 mon.a (mon.0) 4 : cluster [DBG] last_changed 2026-03-10T14:45:30.662441+0000 2026-03-10T14:45:33.689 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: cluster 2026-03-10T14:45:33.355704+0000 mon.a (mon.0) 4 : cluster [DBG] last_changed 2026-03-10T14:45:30.662441+0000 2026-03-10T14:45:33.689 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: cluster 2026-03-10T14:45:33.355713+0000 mon.a (mon.0) 5 : cluster [DBG] created 2026-03-10T14:45:30.662441+0000 2026-03-10T14:45:33.689 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: cluster 2026-03-10T14:45:33.355713+0000 
mon.a (mon.0) 5 : cluster [DBG] created 2026-03-10T14:45:30.662441+0000 2026-03-10T14:45:33.689 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: cluster 2026-03-10T14:45:33.355716+0000 mon.a (mon.0) 6 : cluster [DBG] min_mon_release 19 (squid) 2026-03-10T14:45:33.689 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: cluster 2026-03-10T14:45:33.355716+0000 mon.a (mon.0) 6 : cluster [DBG] min_mon_release 19 (squid) 2026-03-10T14:45:33.689 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: cluster 2026-03-10T14:45:33.355720+0000 mon.a (mon.0) 7 : cluster [DBG] election_strategy: 1 2026-03-10T14:45:33.689 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: cluster 2026-03-10T14:45:33.355720+0000 mon.a (mon.0) 7 : cluster [DBG] election_strategy: 1 2026-03-10T14:45:33.689 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: cluster 2026-03-10T14:45:33.355723+0000 mon.a (mon.0) 8 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a 2026-03-10T14:45:33.689 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: cluster 2026-03-10T14:45:33.355723+0000 mon.a (mon.0) 8 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a 2026-03-10T14:45:33.689 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: cluster 2026-03-10T14:45:33.355984+0000 mon.a (mon.0) 9 : cluster [DBG] fsmap 2026-03-10T14:45:33.689 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: cluster 2026-03-10T14:45:33.355984+0000 mon.a (mon.0) 9 : cluster [DBG] fsmap 2026-03-10T14:45:33.689 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: cluster 2026-03-10T14:45:33.356000+0000 mon.a (mon.0) 10 : cluster [DBG] osdmap e1: 0 total, 0 up, 0 in 2026-03-10T14:45:33.689 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: cluster 2026-03-10T14:45:33.356000+0000 
mon.a (mon.0) 10 : cluster [DBG] osdmap e1: 0 total, 0 up, 0 in 2026-03-10T14:45:33.689 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: cluster 2026-03-10T14:45:33.356552+0000 mon.a (mon.0) 11 : cluster [DBG] mgrmap e1: no daemons active 2026-03-10T14:45:33.689 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 bash[20726]: cluster 2026-03-10T14:45:33.356552+0000 mon.a (mon.0) 11 : cluster [DBG] mgrmap e1: no daemons active 2026-03-10T14:45:33.689 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:45:33.696 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@mgr.y 2026-03-10T14:45:33.697 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Failed to reset failed state of unit ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@mgr.y.service: Unit ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@mgr.y.service not loaded. 2026-03-10T14:45:33.903 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7.target.wants/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@mgr.y.service → /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service. 2026-03-10T14:45:33.914 INFO:teuthology.orchestra.run.vm00.stdout:firewalld does not appear to be present 2026-03-10T14:45:33.915 INFO:teuthology.orchestra.run.vm00.stdout:Not possible to enable service . 
firewalld.service is not available 2026-03-10T14:45:33.915 INFO:teuthology.orchestra.run.vm00.stdout:firewalld does not appear to be present 2026-03-10T14:45:33.915 INFO:teuthology.orchestra.run.vm00.stdout:Not possible to open ports <[9283, 8765]>. firewalld.service is not available 2026-03-10T14:45:33.915 INFO:teuthology.orchestra.run.vm00.stdout:Waiting for mgr to start... 2026-03-10T14:45:33.915 INFO:teuthology.orchestra.run.vm00.stdout:Waiting for mgr... 2026-03-10T14:45:33.970 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:33 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:45:33.971 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:33 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:45:33.971 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:33 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-10T14:45:33.971 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:33 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:45:33.971 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:33 vm00 systemd[1]: Started Ceph mgr.y for 93bd26bc-1c8f-11f1-8404-610ce866bde7. 2026-03-10T14:45:33.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:33 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-10T14:45:34.250 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 2026-03-10T14:45:34.250 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout { 2026-03-10T14:45:34.250 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "fsid": "93bd26bc-1c8f-11f1-8404-610ce866bde7", 2026-03-10T14:45:34.250 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "health": { 2026-03-10T14:45:34.250 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-10T14:45:34.250 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-10T14:45:34.250 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-10T14:45:34.250 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T14:45:34.250 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-10T14:45:34.250 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-10T14:45:34.250 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 0 2026-03-10T14:45:34.250 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout ], 2026-03-10T14:45:34.250 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-10T14:45:34.250 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "a" 2026-03-10T14:45:34.250 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout ], 2026-03-10T14:45:34.250 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "quorum_age": 0, 2026-03-10T14:45:34.250 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-10T14:45:34.250 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T14:45:34.250 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid", 2026-03-10T14:45:34.250 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-10T14:45:34.250 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T14:45:34.250 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-10T14:45:34.250 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T14:45:34.251 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-10T14:45:34.251 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-10T14:45:34.251 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-10T14:45:34.251 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-10T14:45:34.251 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-10T14:45:34.251 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-10T14:45:34.252 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T14:45:34.252 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-10T14:45:34.252 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-10T14:45:34.252 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-10T14:45:34.252 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-10T14:45:34.252 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-10T14:45:34.252 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-10T14:45:34.252 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-10T14:45:34.252 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-10T14:45:34.252 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-10T14:45:34.252 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T14:45:34.252 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-10T14:45:34.252 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T14:45:34.252 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "btime": "2026-03-10T14:45:32:261204+0000", 2026-03-10T14:45:34.252 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-10T14:45:34.252 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-10T14:45:34.252 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T14:45:34.252 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-10T14:45:34.252 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "available": false, 2026-03-10T14:45:34.252 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-10T14:45:34.252 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-10T14:45:34.252 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-10T14:45:34.252 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-10T14:45:34.252 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "restful" 2026-03-10T14:45:34.253 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout ], 2026-03-10T14:45:34.253 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T14:45:34.253 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T14:45:34.253 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-10T14:45:34.253 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1, 
2026-03-10T14:45:34.253 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "modified": "2026-03-10T14:45:32.262024+0000",
2026-03-10T14:45:34.253 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "services": {}
2026-03-10T14:45:34.253 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout },
2026-03-10T14:45:34.253 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "progress_events": {}
2026-03-10T14:45:34.253 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }
2026-03-10T14:45:34.253 INFO:teuthology.orchestra.run.vm00.stdout:mgr not available, waiting (1/15)...
2026-03-10T14:45:34.367 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:34 vm00 bash[21005]: debug 2026-03-10T14:45:34.175+0000 7f03bf07b140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
2026-03-10T14:45:34.367 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:34 vm00 bash[21005]: debug 2026-03-10T14:45:34.219+0000 7f03bf07b140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
2026-03-10T14:45:34.367 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:34 vm00 bash[21005]: debug 2026-03-10T14:45:34.359+0000 7f03bf07b140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
2026-03-10T14:45:34.704 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:34 vm00 bash[20726]: audit 2026-03-10T14:45:33.442223+0000 mon.a (mon.0) 12 : audit [INF] from='client.? 192.168.123.100:0/2493608604' entity='client.admin'
2026-03-10T14:45:34.704 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:34 vm00 bash[20726]: audit 2026-03-10T14:45:33.442223+0000 mon.a (mon.0) 12 : audit [INF] from='client.? 192.168.123.100:0/2493608604' entity='client.admin'
2026-03-10T14:45:34.704 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:34 vm00 bash[20726]: audit 2026-03-10T14:45:34.193388+0000 mon.a (mon.0) 13 : audit [DBG] from='client.? 192.168.123.100:0/1971401441' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
2026-03-10T14:45:34.704 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:34 vm00 bash[20726]: audit 2026-03-10T14:45:34.193388+0000 mon.a (mon.0) 13 : audit [DBG] from='client.? 192.168.123.100:0/1971401441' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
2026-03-10T14:45:34.970 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:34 vm00 bash[21005]: debug 2026-03-10T14:45:34.695+0000 7f03bf07b140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
2026-03-10T14:45:35.591 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:35 vm00 bash[21005]: debug 2026-03-10T14:45:35.211+0000 7f03bf07b140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
2026-03-10T14:45:35.591 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:35 vm00 bash[21005]: debug 2026-03-10T14:45:35.303+0000 7f03bf07b140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
2026-03-10T14:45:35.592 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:35 vm00 bash[21005]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
2026-03-10T14:45:35.592 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:35 vm00 bash[21005]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
2026-03-10T14:45:35.592 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:35 vm00 bash[21005]: from numpy import show_config as show_numpy_config
2026-03-10T14:45:35.592 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:35 vm00 bash[21005]: debug 2026-03-10T14:45:35.431+0000 7f03bf07b140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
2026-03-10T14:45:35.592 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:35 vm00 bash[21005]: debug 2026-03-10T14:45:35.583+0000 7f03bf07b140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
2026-03-10T14:45:35.970 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:35 vm00 bash[21005]: debug 2026-03-10T14:45:35.627+0000 7f03bf07b140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
2026-03-10T14:45:35.970 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:35 vm00 bash[21005]: debug 2026-03-10T14:45:35.671+0000 7f03bf07b140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
2026-03-10T14:45:35.970 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:35 vm00 bash[21005]: debug 2026-03-10T14:45:35.715+0000 7f03bf07b140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
2026-03-10T14:45:35.970 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:35 vm00 bash[21005]: debug 2026-03-10T14:45:35.775+0000 7f03bf07b140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
2026-03-10T14:45:36.522 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout
2026-03-10T14:45:36.522 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout {
2026-03-10T14:45:36.522 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "fsid": "93bd26bc-1c8f-11f1-8404-610ce866bde7",
2026-03-10T14:45:36.522 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "health": {
2026-03-10T14:45:36.522 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK",
2026-03-10T14:45:36.522 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "checks": {},
2026-03-10T14:45:36.522 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "mutes": []
2026-03-10T14:45:36.522 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout },
2026-03-10T14:45:36.522 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "election_epoch": 5,
2026-03-10T14:45:36.522 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "quorum": [
2026-03-10T14:45:36.522 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 0
2026-03-10T14:45:36.522 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout ],
2026-03-10T14:45:36.522 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "quorum_names": [
2026-03-10T14:45:36.522 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "a"
2026-03-10T14:45:36.522 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout ],
2026-03-10T14:45:36.522 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "quorum_age": 3,
2026-03-10T14:45:36.522 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "monmap": {
2026-03-10T14:45:36.522 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-10T14:45:36.522 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid",
2026-03-10T14:45:36.522 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_mons": 1
2026-03-10T14:45:36.522 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout },
2026-03-10T14:45:36.522 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "osdmap": {
2026-03-10T14:45:36.522 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-10T14:45:36.522 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_osds": 0,
2026-03-10T14:45:36.522 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_up_osds": 0,
2026-03-10T14:45:36.522 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "osd_up_since": 0,
2026-03-10T14:45:36.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_in_osds": 0,
2026-03-10T14:45:36.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "osd_in_since": 0,
2026-03-10T14:45:36.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0
2026-03-10T14:45:36.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout },
2026-03-10T14:45:36.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "pgmap": {
2026-03-10T14:45:36.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "pgs_by_state": [],
2026-03-10T14:45:36.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_pgs": 0,
2026-03-10T14:45:36.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_pools": 0,
2026-03-10T14:45:36.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_objects": 0,
2026-03-10T14:45:36.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "data_bytes": 0,
2026-03-10T14:45:36.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "bytes_used": 0,
2026-03-10T14:45:36.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "bytes_avail": 0,
2026-03-10T14:45:36.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "bytes_total": 0
2026-03-10T14:45:36.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout },
2026-03-10T14:45:36.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "fsmap": {
2026-03-10T14:45:36.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-10T14:45:36.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "btime": "2026-03-10T14:45:32:261204+0000",
2026-03-10T14:45:36.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "by_rank": [],
2026-03-10T14:45:36.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "up:standby": 0
2026-03-10T14:45:36.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout },
2026-03-10T14:45:36.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "mgrmap": {
2026-03-10T14:45:36.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "available": false,
2026-03-10T14:45:36.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_standbys": 0,
2026-03-10T14:45:36.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "modules": [
2026-03-10T14:45:36.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "iostat",
2026-03-10T14:45:36.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "nfs",
2026-03-10T14:45:36.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "restful"
2026-03-10T14:45:36.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout ],
2026-03-10T14:45:36.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "services": {}
2026-03-10T14:45:36.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout },
2026-03-10T14:45:36.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "servicemap": {
2026-03-10T14:45:36.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-10T14:45:36.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "modified": "2026-03-10T14:45:32.262024+0000",
2026-03-10T14:45:36.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "services": {}
2026-03-10T14:45:36.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout },
2026-03-10T14:45:36.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "progress_events": {}
2026-03-10T14:45:36.523 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }
2026-03-10T14:45:36.523 INFO:teuthology.orchestra.run.vm00.stdout:mgr not available, waiting (2/15)...
2026-03-10T14:45:36.585 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:36 vm00 bash[20726]: audit 2026-03-10T14:45:36.457926+0000 mon.a (mon.0) 14 : audit [DBG] from='client.? 192.168.123.100:0/4138649353' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
2026-03-10T14:45:36.585 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:36 vm00 bash[20726]: audit 2026-03-10T14:45:36.457926+0000 mon.a (mon.0) 14 : audit [DBG] from='client.? 192.168.123.100:0/4138649353' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
2026-03-10T14:45:36.585 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:36 vm00 bash[21005]: debug 2026-03-10T14:45:36.275+0000 7f03bf07b140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
2026-03-10T14:45:36.586 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:36 vm00 bash[21005]: debug 2026-03-10T14:45:36.323+0000 7f03bf07b140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
2026-03-10T14:45:36.586 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:36 vm00 bash[21005]: debug 2026-03-10T14:45:36.379+0000 7f03bf07b140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
2026-03-10T14:45:36.971 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:36 vm00 bash[21005]: debug 2026-03-10T14:45:36.579+0000 7f03bf07b140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
2026-03-10T14:45:36.971 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:36 vm00 bash[21005]: debug 2026-03-10T14:45:36.623+0000 7f03bf07b140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
2026-03-10T14:45:36.971 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:36 vm00 bash[21005]: debug 2026-03-10T14:45:36.667+0000 7f03bf07b140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
2026-03-10T14:45:36.971 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:36 vm00 bash[21005]: debug 2026-03-10T14:45:36.803+0000 7f03bf07b140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
2026-03-10T14:45:37.266 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:36 vm00 bash[21005]: debug 2026-03-10T14:45:36.991+0000 7f03bf07b140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
2026-03-10T14:45:37.266 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:37 vm00 bash[21005]: debug 2026-03-10T14:45:37.183+0000 7f03bf07b140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
2026-03-10T14:45:37.266 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:37 vm00 bash[21005]: debug 2026-03-10T14:45:37.223+0000 7f03bf07b140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
2026-03-10T14:45:37.720 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:37 vm00 bash[21005]: debug 2026-03-10T14:45:37.275+0000 7f03bf07b140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
2026-03-10T14:45:37.721 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:37 vm00 bash[21005]: debug 2026-03-10T14:45:37.459+0000 7f03bf07b140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
2026-03-10T14:45:38.221 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:37 vm00 bash[21005]: debug 2026-03-10T14:45:37.823+0000 7f03bf07b140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
2026-03-10T14:45:38.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:37 vm00 bash[20726]: cluster 2026-03-10T14:45:37.829939+0000 mon.a (mon.0) 15 : cluster [INF] Activating manager daemon y
2026-03-10T14:45:38.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:37 vm00 bash[20726]: cluster 2026-03-10T14:45:37.829939+0000 mon.a (mon.0) 15 : cluster [INF] Activating manager daemon y
2026-03-10T14:45:38.825 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout
2026-03-10T14:45:38.825 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout {
2026-03-10T14:45:38.825 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "fsid": "93bd26bc-1c8f-11f1-8404-610ce866bde7",
2026-03-10T14:45:38.825 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "health": {
2026-03-10T14:45:38.825 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK",
2026-03-10T14:45:38.825 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "checks": {},
2026-03-10T14:45:38.825 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "mutes": []
2026-03-10T14:45:38.825 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout },
2026-03-10T14:45:38.825 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "election_epoch": 5,
2026-03-10T14:45:38.825 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "quorum": [
2026-03-10T14:45:38.825 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 0
2026-03-10T14:45:38.825 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout ],
2026-03-10T14:45:38.825 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "quorum_names": [
2026-03-10T14:45:38.825 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "a"
2026-03-10T14:45:38.825 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout ],
2026-03-10T14:45:38.825 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "quorum_age": 5,
2026-03-10T14:45:38.825 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "monmap": {
2026-03-10T14:45:38.825 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-10T14:45:38.825 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid",
2026-03-10T14:45:38.825 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_mons": 1
2026-03-10T14:45:38.825 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout },
2026-03-10T14:45:38.825 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "osdmap": {
2026-03-10T14:45:38.825 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-10T14:45:38.825 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_osds": 0,
2026-03-10T14:45:38.825 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_up_osds": 0,
2026-03-10T14:45:38.825 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "osd_up_since": 0,
2026-03-10T14:45:38.826 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_in_osds": 0,
2026-03-10T14:45:38.826 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "osd_in_since": 0,
2026-03-10T14:45:38.826 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0
2026-03-10T14:45:38.826 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout },
2026-03-10T14:45:38.826 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "pgmap": {
2026-03-10T14:45:38.826 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "pgs_by_state": [],
2026-03-10T14:45:38.826 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_pgs": 0,
2026-03-10T14:45:38.826 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_pools": 0,
2026-03-10T14:45:38.826 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_objects": 0,
2026-03-10T14:45:38.826 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "data_bytes": 0,
2026-03-10T14:45:38.826 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "bytes_used": 0,
2026-03-10T14:45:38.826 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "bytes_avail": 0,
2026-03-10T14:45:38.826 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "bytes_total": 0
2026-03-10T14:45:38.826 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout },
2026-03-10T14:45:38.826 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "fsmap": {
2026-03-10T14:45:38.826 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-10T14:45:38.826 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "btime": "2026-03-10T14:45:32:261204+0000",
2026-03-10T14:45:38.826 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "by_rank": [],
2026-03-10T14:45:38.826 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "up:standby": 0
2026-03-10T14:45:38.826 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout },
2026-03-10T14:45:38.826 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "mgrmap": {
2026-03-10T14:45:38.826 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "available": false,
2026-03-10T14:45:38.826 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_standbys": 0,
2026-03-10T14:45:38.826 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "modules": [
2026-03-10T14:45:38.826 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "iostat",
2026-03-10T14:45:38.826 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "nfs",
2026-03-10T14:45:38.826 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "restful"
2026-03-10T14:45:38.826 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout ],
2026-03-10T14:45:38.826 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "services": {}
2026-03-10T14:45:38.826 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout },
2026-03-10T14:45:38.826 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "servicemap": {
2026-03-10T14:45:38.826 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-10T14:45:38.826 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "modified": "2026-03-10T14:45:32.262024+0000",
2026-03-10T14:45:38.826 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "services": {}
2026-03-10T14:45:38.826 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout },
2026-03-10T14:45:38.826 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "progress_events": {}
2026-03-10T14:45:38.826 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }
2026-03-10T14:45:38.826 INFO:teuthology.orchestra.run.vm00.stdout:mgr not available, waiting (3/15)...
2026-03-10T14:45:39.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:38 vm00 bash[20726]: cluster 2026-03-10T14:45:37.899005+0000 mon.a (mon.0) 16 : cluster [DBG] mgrmap e2: y(active, starting, since 0.0691116s)
2026-03-10T14:45:39.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:38 vm00 bash[20726]: cluster 2026-03-10T14:45:37.899005+0000 mon.a (mon.0) 16 : cluster [DBG] mgrmap e2: y(active, starting, since 0.0691116s)
2026-03-10T14:45:39.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:38 vm00 bash[20726]: audit 2026-03-10T14:45:37.907926+0000 mon.a (mon.0) 17 : audit [DBG] from='mgr.14100 192.168.123.100:0/2424899720' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-10T14:45:39.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:38 vm00 bash[20726]: audit 2026-03-10T14:45:37.907926+0000 mon.a (mon.0) 17 : audit [DBG] from='mgr.14100 192.168.123.100:0/2424899720' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-10T14:45:39.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:38 vm00 bash[20726]: audit 2026-03-10T14:45:37.908073+0000 mon.a (mon.0) 18 : audit [DBG] from='mgr.14100 192.168.123.100:0/2424899720' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-10T14:45:39.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:38 vm00 bash[20726]: audit 2026-03-10T14:45:37.908073+0000 mon.a (mon.0) 18 : audit [DBG] from='mgr.14100 192.168.123.100:0/2424899720' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-10T14:45:39.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:38 vm00 bash[20726]: audit 2026-03-10T14:45:37.908214+0000 mon.a (mon.0) 19 : audit [DBG] from='mgr.14100 192.168.123.100:0/2424899720' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-10T14:45:39.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:38 vm00 bash[20726]: audit 2026-03-10T14:45:37.908214+0000 mon.a (mon.0) 19 : audit [DBG] from='mgr.14100 192.168.123.100:0/2424899720' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-10T14:45:39.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:38 vm00 bash[20726]: audit 2026-03-10T14:45:37.908332+0000 mon.a (mon.0) 20 : audit [DBG] from='mgr.14100 192.168.123.100:0/2424899720' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T14:45:39.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:38 vm00 bash[20726]: audit 2026-03-10T14:45:37.908332+0000 mon.a (mon.0) 20 : audit [DBG] from='mgr.14100 192.168.123.100:0/2424899720' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T14:45:39.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:38 vm00 bash[20726]: audit 2026-03-10T14:45:37.908442+0000 mon.a (mon.0) 21 : audit [DBG] from='mgr.14100 192.168.123.100:0/2424899720' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch
2026-03-10T14:45:39.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:38 vm00 bash[20726]: audit 2026-03-10T14:45:37.908442+0000 mon.a (mon.0) 21 : audit [DBG] from='mgr.14100 192.168.123.100:0/2424899720' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch
2026-03-10T14:45:39.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:38 vm00 bash[20726]: cluster 2026-03-10T14:45:37.915932+0000 mon.a (mon.0) 22 : cluster [INF] Manager daemon y is now available
2026-03-10T14:45:39.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:38 vm00 bash[20726]: cluster 2026-03-10T14:45:37.915932+0000 mon.a (mon.0) 22 : cluster [INF] Manager daemon y is now available
2026-03-10T14:45:39.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:38 vm00 bash[20726]: audit 2026-03-10T14:45:37.932538+0000 mon.a (mon.0) 23 : audit [INF] from='mgr.14100 192.168.123.100:0/2424899720' entity='mgr.y'
2026-03-10T14:45:39.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:38 vm00 bash[20726]: audit 2026-03-10T14:45:37.932538+0000 mon.a (mon.0) 23 : audit [INF] from='mgr.14100 192.168.123.100:0/2424899720' entity='mgr.y'
2026-03-10T14:45:39.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:38 vm00 bash[20726]: audit 2026-03-10T14:45:37.933894+0000 mon.a (mon.0) 24 : audit [INF] from='mgr.14100 192.168.123.100:0/2424899720' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-10T14:45:39.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:38 vm00 bash[20726]: audit 2026-03-10T14:45:37.933894+0000 mon.a (mon.0) 24 : audit [INF] from='mgr.14100 192.168.123.100:0/2424899720' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-10T14:45:39.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:38 vm00 bash[20726]: audit 2026-03-10T14:45:37.935797+0000 mon.a (mon.0) 25 : audit [INF] from='mgr.14100 192.168.123.100:0/2424899720' entity='mgr.y'
2026-03-10T14:45:39.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:38 vm00 bash[20726]: audit 2026-03-10T14:45:37.935797+0000 mon.a (mon.0) 25 : audit [INF] from='mgr.14100 192.168.123.100:0/2424899720' entity='mgr.y'
2026-03-10T14:45:39.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:38 vm00 bash[20726]: audit 2026-03-10T14:45:37.939479+0000 mon.a (mon.0) 26 : audit [INF] from='mgr.14100 192.168.123.100:0/2424899720' entity='mgr.y'
2026-03-10T14:45:39.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:38 vm00 bash[20726]: audit 2026-03-10T14:45:37.939479+0000 mon.a (mon.0) 26 : audit [INF] from='mgr.14100 192.168.123.100:0/2424899720' entity='mgr.y'
2026-03-10T14:45:39.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:38 vm00 bash[20726]: audit 2026-03-10T14:45:37.944211+0000 mon.a (mon.0) 27 : audit [INF] from='mgr.14100 192.168.123.100:0/2424899720' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-10T14:45:39.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:38 vm00 bash[20726]: audit 2026-03-10T14:45:37.944211+0000 mon.a (mon.0) 27 : audit [INF] from='mgr.14100 192.168.123.100:0/2424899720' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-10T14:45:39.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:38 vm00 bash[20726]: audit 2026-03-10T14:45:38.781521+0000 mon.a (mon.0) 28 : audit [DBG] from='client.? 192.168.123.100:0/3133102401' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
2026-03-10T14:45:39.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:38 vm00 bash[20726]: audit 2026-03-10T14:45:38.781521+0000 mon.a (mon.0) 28 : audit [DBG] from='client.? 192.168.123.100:0/3133102401' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
2026-03-10T14:45:40.222 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:39 vm00 bash[20726]: cluster 2026-03-10T14:45:38.913208+0000 mon.a (mon.0) 29 : cluster [DBG] mgrmap e3: y(active, since 1.08334s)
2026-03-10T14:45:40.222 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:39 vm00 bash[20726]: cluster 2026-03-10T14:45:38.913208+0000 mon.a (mon.0) 29 : cluster [DBG] mgrmap e3: y(active, since 1.08334s)
2026-03-10T14:45:41.159 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:40 vm00 bash[20726]: cluster 2026-03-10T14:45:39.929440+0000 mon.a (mon.0) 30 : cluster [DBG] mgrmap e4: y(active, since 2s)
2026-03-10T14:45:41.159 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:40 vm00 bash[20726]: cluster 2026-03-10T14:45:39.929440+0000 mon.a (mon.0) 30 : cluster [DBG] mgrmap e4: y(active, since 2s)
2026-03-10T14:45:41.190 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout
2026-03-10T14:45:41.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout {
2026-03-10T14:45:41.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "fsid": "93bd26bc-1c8f-11f1-8404-610ce866bde7",
2026-03-10T14:45:41.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "health": {
2026-03-10T14:45:41.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK",
2026-03-10T14:45:41.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "checks": {},
2026-03-10T14:45:41.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "mutes": []
2026-03-10T14:45:41.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout },
2026-03-10T14:45:41.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "election_epoch": 5,
2026-03-10T14:45:41.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "quorum": [
2026-03-10T14:45:41.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 0
2026-03-10T14:45:41.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout ],
2026-03-10T14:45:41.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "quorum_names": [
2026-03-10T14:45:41.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "a"
2026-03-10T14:45:41.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout ],
2026-03-10T14:45:41.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "quorum_age": 7,
2026-03-10T14:45:41.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "monmap": {
2026-03-10T14:45:41.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-10T14:45:41.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid",
2026-03-10T14:45:41.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_mons": 1
2026-03-10T14:45:41.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout },
2026-03-10T14:45:41.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "osdmap": {
2026-03-10T14:45:41.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-10T14:45:41.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_osds": 0,
2026-03-10T14:45:41.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_up_osds": 0,
2026-03-10T14:45:41.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "osd_up_since": 0,
2026-03-10T14:45:41.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_in_osds": 0,
2026-03-10T14:45:41.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "osd_in_since": 0,
2026-03-10T14:45:41.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0
2026-03-10T14:45:41.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout },
2026-03-10T14:45:41.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-10T14:45:41.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-10T14:45:41.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-10T14:45:41.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-10T14:45:41.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-10T14:45:41.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-10T14:45:41.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-10T14:45:41.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-10T14:45:41.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-10T14:45:41.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T14:45:41.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-10T14:45:41.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T14:45:41.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "btime": "2026-03-10T14:45:32:261204+0000", 2026-03-10T14:45:41.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-10T14:45:41.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-10T14:45:41.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T14:45:41.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-10T14:45:41.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "available": true, 2026-03-10T14:45:41.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-10T14:45:41.191 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-10T14:45:41.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-10T14:45:41.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-10T14:45:41.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "restful" 2026-03-10T14:45:41.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout ], 2026-03-10T14:45:41.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T14:45:41.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T14:45:41.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-10T14:45:41.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T14:45:41.192 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "modified": "2026-03-10T14:45:32.262024+0000", 2026-03-10T14:45:41.192 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T14:45:41.192 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T14:45:41.192 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-10T14:45:41.192 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout } 2026-03-10T14:45:41.192 INFO:teuthology.orchestra.run.vm00.stdout:mgr is available 2026-03-10T14:45:41.493 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 2026-03-10T14:45:41.493 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout [global] 2026-03-10T14:45:41.493 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout fsid = 93bd26bc-1c8f-11f1-8404-610ce866bde7 2026-03-10T14:45:41.493 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon_cluster_log_file_level = debug 2026-03-10T14:45:41.493 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 
mon_host = [v2:192.168.123.100:3300,v1:192.168.123.100:6789] 2026-03-10T14:45:41.493 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon_osd_allow_pg_remap = true 2026-03-10T14:45:41.493 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon_osd_allow_primary_affinity = true 2026-03-10T14:45:41.493 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon_warn_on_no_sortbitwise = false 2026-03-10T14:45:41.494 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout osd_crush_chooseleaf_type = 0 2026-03-10T14:45:41.494 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 2026-03-10T14:45:41.494 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout [mgr] 2026-03-10T14:45:41.494 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mgr/telemetry/nag = false 2026-03-10T14:45:41.494 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 2026-03-10T14:45:41.494 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout [osd] 2026-03-10T14:45:41.494 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout osd_map_max_advance = 10 2026-03-10T14:45:41.494 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout osd_sloppy_crc = true 2026-03-10T14:45:41.494 INFO:teuthology.orchestra.run.vm00.stdout:Enabling cephadm module... 
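The bootstrap output above polls `ceph status --format json-pretty` until the mgrmap reports an active mgr ("mgr is available"), then assimilates the minimal ceph.conf shown. A stand-alone sketch of that availability check, assuming a hypothetical helper name (`mgr_is_available` is ours, not cephadm's):

```python
import json

def mgr_is_available(status_json: str) -> bool:
    """Return True when the mgrmap in `ceph status` JSON output reports
    an available active mgr -- the condition behind the "mgr is
    available" message in the log above."""
    status = json.loads(status_json)
    return bool(status.get("mgrmap", {}).get("available"))

# Minimal excerpt shaped like the status dump captured in the log.
sample = json.dumps({
    "quorum_names": ["a"],
    "osdmap": {"epoch": 1, "num_osds": 0},
    "mgrmap": {"available": True, "num_standbys": 0},
})
print(mgr_is_available(sample))  # -> True
```

In the real run this check sits inside a retry loop; the sketch only shows the predicate applied to one snapshot.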
2026-03-10T14:45:42.220 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:42 vm00 bash[21005]: ignoring --setuser ceph since I am not root 2026-03-10T14:45:42.221 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:42 vm00 bash[21005]: ignoring --setgroup ceph since I am not root 2026-03-10T14:45:42.221 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:42 vm00 bash[21005]: debug 2026-03-10T14:45:42.131+0000 7fe3ccbc9140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-10T14:45:42.221 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:42 vm00 bash[21005]: debug 2026-03-10T14:45:42.171+0000 7fe3ccbc9140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-10T14:45:42.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:41 vm00 bash[20726]: audit 2026-03-10T14:45:41.145749+0000 mon.a (mon.0) 31 : audit [DBG] from='client.? 192.168.123.100:0/2357266271' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T14:45:42.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:41 vm00 bash[20726]: audit 2026-03-10T14:45:41.145749+0000 mon.a (mon.0) 31 : audit [DBG] from='client.? 192.168.123.100:0/2357266271' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T14:45:42.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:41 vm00 bash[20726]: audit 2026-03-10T14:45:41.428503+0000 mon.a (mon.0) 32 : audit [INF] from='client.? 192.168.123.100:0/500649983' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch 2026-03-10T14:45:42.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:41 vm00 bash[20726]: audit 2026-03-10T14:45:41.428503+0000 mon.a (mon.0) 32 : audit [INF] from='client.? 
192.168.123.100:0/500649983' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch 2026-03-10T14:45:42.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:41 vm00 bash[20726]: audit 2026-03-10T14:45:41.751569+0000 mon.a (mon.0) 33 : audit [INF] from='client.? 192.168.123.100:0/19734054' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch 2026-03-10T14:45:42.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:41 vm00 bash[20726]: audit 2026-03-10T14:45:41.751569+0000 mon.a (mon.0) 33 : audit [INF] from='client.? 192.168.123.100:0/19734054' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch 2026-03-10T14:45:42.390 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout { 2026-03-10T14:45:42.391 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 5, 2026-03-10T14:45:42.391 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "available": true, 2026-03-10T14:45:42.391 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "active_name": "y", 2026-03-10T14:45:42.391 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_standby": 0 2026-03-10T14:45:42.391 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout } 2026-03-10T14:45:42.391 INFO:teuthology.orchestra.run.vm00.stdout:Waiting for the mgr to restart... 2026-03-10T14:45:42.391 INFO:teuthology.orchestra.run.vm00.stdout:Waiting for mgr epoch 5... 
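"Waiting for mgr epoch 5..." above is a wait loop over mgr status JSON: it completes once the reported mgrmap epoch has reached the target and the active mgr reports itself initialized (the later dump `{"mgrmap_epoch": 7, "initialized": true}` satisfies it, hence "mgr epoch 5 is available"). A minimal sketch of that predicate; the function name and field defaults are assumptions, not taken from the log:

```python
import json

def mgr_epoch_reached(mgr_status_json: str, target_epoch: int) -> bool:
    """Epoch-wait condition: the active mgr is initialized and its
    mgrmap epoch is at or past the target epoch."""
    status = json.loads(mgr_status_json)
    return bool(status.get("initialized", False)
                and status.get("mgrmap_epoch", 0) >= target_epoch)

# Mirrors the mgr status dump that appears later in this log.
print(mgr_epoch_reached('{"mgrmap_epoch": 7, "initialized": true}', 5))  # -> True
```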
2026-03-10T14:45:42.712 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:42 vm00 bash[21005]: debug 2026-03-10T14:45:42.315+0000 7fe3ccbc9140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-10T14:45:42.969 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:42 vm00 bash[21005]: debug 2026-03-10T14:45:42.703+0000 7fe3ccbc9140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-10T14:45:43.220 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:42 vm00 bash[20726]: audit 2026-03-10T14:45:41.956659+0000 mon.a (mon.0) 34 : audit [INF] from='client.? 192.168.123.100:0/19734054' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished 2026-03-10T14:45:43.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:42 vm00 bash[20726]: audit 2026-03-10T14:45:41.956659+0000 mon.a (mon.0) 34 : audit [INF] from='client.? 192.168.123.100:0/19734054' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished 2026-03-10T14:45:43.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:42 vm00 bash[20726]: cluster 2026-03-10T14:45:41.959831+0000 mon.a (mon.0) 35 : cluster [DBG] mgrmap e5: y(active, since 4s) 2026-03-10T14:45:43.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:42 vm00 bash[20726]: cluster 2026-03-10T14:45:41.959831+0000 mon.a (mon.0) 35 : cluster [DBG] mgrmap e5: y(active, since 4s) 2026-03-10T14:45:43.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:42 vm00 bash[20726]: audit 2026-03-10T14:45:42.333099+0000 mon.a (mon.0) 36 : audit [DBG] from='client.? 192.168.123.100:0/3061387108' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-10T14:45:43.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:42 vm00 bash[20726]: audit 2026-03-10T14:45:42.333099+0000 mon.a (mon.0) 36 : audit [DBG] from='client.? 
192.168.123.100:0/3061387108' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-10T14:45:43.599 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:43 vm00 bash[21005]: debug 2026-03-10T14:45:43.219+0000 7fe3ccbc9140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-10T14:45:43.599 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:43 vm00 bash[21005]: debug 2026-03-10T14:45:43.311+0000 7fe3ccbc9140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-10T14:45:43.599 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:43 vm00 bash[21005]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-10T14:45:43.599 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:43 vm00 bash[21005]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
2026-03-10T14:45:43.599 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:43 vm00 bash[21005]: from numpy import show_config as show_numpy_config 2026-03-10T14:45:43.599 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:43 vm00 bash[21005]: debug 2026-03-10T14:45:43.435+0000 7fe3ccbc9140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-10T14:45:43.970 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:43 vm00 bash[21005]: debug 2026-03-10T14:45:43.591+0000 7fe3ccbc9140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-10T14:45:43.970 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:43 vm00 bash[21005]: debug 2026-03-10T14:45:43.631+0000 7fe3ccbc9140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-10T14:45:43.970 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:43 vm00 bash[21005]: debug 2026-03-10T14:45:43.675+0000 7fe3ccbc9140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-10T14:45:43.971 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:43 vm00 bash[21005]: debug 2026-03-10T14:45:43.727+0000 7fe3ccbc9140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-10T14:45:43.971 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:43 vm00 bash[21005]: debug 2026-03-10T14:45:43.783+0000 7fe3ccbc9140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-10T14:45:44.596 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:44 vm00 bash[21005]: debug 2026-03-10T14:45:44.291+0000 7fe3ccbc9140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-10T14:45:44.596 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:44 vm00 bash[21005]: debug 2026-03-10T14:45:44.335+0000 7fe3ccbc9140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-10T14:45:44.597 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:44 vm00 bash[21005]: debug 2026-03-10T14:45:44.375+0000 7fe3ccbc9140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 
2026-03-10T14:45:44.597 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:44 vm00 bash[21005]: debug 2026-03-10T14:45:44.539+0000 7fe3ccbc9140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-10T14:45:44.915 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:44 vm00 bash[21005]: debug 2026-03-10T14:45:44.587+0000 7fe3ccbc9140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-10T14:45:44.915 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:44 vm00 bash[21005]: debug 2026-03-10T14:45:44.627+0000 7fe3ccbc9140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-10T14:45:44.915 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:44 vm00 bash[21005]: debug 2026-03-10T14:45:44.739+0000 7fe3ccbc9140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-10T14:45:45.195 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:44 vm00 bash[21005]: debug 2026-03-10T14:45:44.907+0000 7fe3ccbc9140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-10T14:45:45.195 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:45 vm00 bash[21005]: debug 2026-03-10T14:45:45.095+0000 7fe3ccbc9140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-10T14:45:45.195 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:45 vm00 bash[21005]: debug 2026-03-10T14:45:45.139+0000 7fe3ccbc9140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-10T14:45:45.470 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:45 vm00 bash[21005]: debug 2026-03-10T14:45:45.187+0000 7fe3ccbc9140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-10T14:45:45.470 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:45 vm00 bash[21005]: debug 2026-03-10T14:45:45.375+0000 7fe3ccbc9140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-10T14:45:45.970 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:45 vm00 bash[21005]: debug 2026-03-10T14:45:45.651+0000 7fe3ccbc9140 -1 
mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-10T14:45:45.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:45 vm00 bash[20726]: cluster 2026-03-10T14:45:45.658891+0000 mon.a (mon.0) 37 : cluster [INF] Active manager daemon y restarted 2026-03-10T14:45:45.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:45 vm00 bash[20726]: cluster 2026-03-10T14:45:45.658891+0000 mon.a (mon.0) 37 : cluster [INF] Active manager daemon y restarted 2026-03-10T14:45:45.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:45 vm00 bash[20726]: cluster 2026-03-10T14:45:45.659169+0000 mon.a (mon.0) 38 : cluster [INF] Activating manager daemon y 2026-03-10T14:45:45.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:45 vm00 bash[20726]: cluster 2026-03-10T14:45:45.659169+0000 mon.a (mon.0) 38 : cluster [INF] Activating manager daemon y 2026-03-10T14:45:45.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:45 vm00 bash[20726]: cluster 2026-03-10T14:45:45.665523+0000 mon.a (mon.0) 39 : cluster [DBG] osdmap e2: 0 total, 0 up, 0 in 2026-03-10T14:45:45.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:45 vm00 bash[20726]: cluster 2026-03-10T14:45:45.665523+0000 mon.a (mon.0) 39 : cluster [DBG] osdmap e2: 0 total, 0 up, 0 in 2026-03-10T14:45:45.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:45 vm00 bash[20726]: cluster 2026-03-10T14:45:45.665757+0000 mon.a (mon.0) 40 : cluster [DBG] mgrmap e6: y(active, starting, since 0.00668889s) 2026-03-10T14:45:45.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:45 vm00 bash[20726]: cluster 2026-03-10T14:45:45.665757+0000 mon.a (mon.0) 40 : cluster [DBG] mgrmap e6: y(active, starting, since 0.00668889s) 2026-03-10T14:45:45.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:45 vm00 bash[20726]: audit 2026-03-10T14:45:45.667548+0000 mon.a (mon.0) 41 : audit [DBG] from='mgr.14120 192.168.123.100:0/1622020882' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 
2026-03-10T14:45:45.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:45 vm00 bash[20726]: audit 2026-03-10T14:45:45.667548+0000 mon.a (mon.0) 41 : audit [DBG] from='mgr.14120 192.168.123.100:0/1622020882' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T14:45:45.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:45 vm00 bash[20726]: audit 2026-03-10T14:45:45.668377+0000 mon.a (mon.0) 42 : audit [DBG] from='mgr.14120 192.168.123.100:0/1622020882' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-10T14:45:45.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:45 vm00 bash[20726]: audit 2026-03-10T14:45:45.668377+0000 mon.a (mon.0) 42 : audit [DBG] from='mgr.14120 192.168.123.100:0/1622020882' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-10T14:45:45.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:45 vm00 bash[20726]: audit 2026-03-10T14:45:45.669257+0000 mon.a (mon.0) 43 : audit [DBG] from='mgr.14120 192.168.123.100:0/1622020882' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T14:45:45.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:45 vm00 bash[20726]: audit 2026-03-10T14:45:45.669257+0000 mon.a (mon.0) 43 : audit [DBG] from='mgr.14120 192.168.123.100:0/1622020882' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T14:45:45.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:45 vm00 bash[20726]: audit 2026-03-10T14:45:45.669726+0000 mon.a (mon.0) 44 : audit [DBG] from='mgr.14120 192.168.123.100:0/1622020882' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T14:45:45.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:45 vm00 bash[20726]: audit 2026-03-10T14:45:45.669726+0000 mon.a (mon.0) 44 : audit [DBG] from='mgr.14120 192.168.123.100:0/1622020882' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T14:45:45.971 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:45 vm00 bash[20726]: audit 2026-03-10T14:45:45.670161+0000 mon.a (mon.0) 45 : audit [DBG] from='mgr.14120 192.168.123.100:0/1622020882' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T14:45:45.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:45 vm00 bash[20726]: audit 2026-03-10T14:45:45.670161+0000 mon.a (mon.0) 45 : audit [DBG] from='mgr.14120 192.168.123.100:0/1622020882' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T14:45:45.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:45 vm00 bash[20726]: cluster 2026-03-10T14:45:45.676080+0000 mon.a (mon.0) 46 : cluster [INF] Manager daemon y is now available 2026-03-10T14:45:45.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:45 vm00 bash[20726]: cluster 2026-03-10T14:45:45.676080+0000 mon.a (mon.0) 46 : cluster [INF] Manager daemon y is now available 2026-03-10T14:45:45.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:45 vm00 bash[20726]: audit 2026-03-10T14:45:45.686508+0000 mon.a (mon.0) 47 : audit [INF] from='mgr.14120 192.168.123.100:0/1622020882' entity='mgr.y' 2026-03-10T14:45:45.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:45 vm00 bash[20726]: audit 2026-03-10T14:45:45.686508+0000 mon.a (mon.0) 47 : audit [INF] from='mgr.14120 192.168.123.100:0/1622020882' entity='mgr.y' 2026-03-10T14:45:45.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:45 vm00 bash[20726]: audit 2026-03-10T14:45:45.690613+0000 mon.a (mon.0) 48 : audit [INF] from='mgr.14120 192.168.123.100:0/1622020882' entity='mgr.y' 2026-03-10T14:45:45.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:45 vm00 bash[20726]: audit 2026-03-10T14:45:45.690613+0000 mon.a (mon.0) 48 : audit [INF] from='mgr.14120 192.168.123.100:0/1622020882' entity='mgr.y' 2026-03-10T14:45:45.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:45 vm00 bash[20726]: audit 2026-03-10T14:45:45.700178+0000 mon.a (mon.0) 49 : audit [DBG] 
from='mgr.14120 192.168.123.100:0/1622020882' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:45:45.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:45 vm00 bash[20726]: audit 2026-03-10T14:45:45.700178+0000 mon.a (mon.0) 49 : audit [DBG] from='mgr.14120 192.168.123.100:0/1622020882' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:45:45.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:45 vm00 bash[20726]: audit 2026-03-10T14:45:45.705121+0000 mon.a (mon.0) 50 : audit [DBG] from='mgr.14120 192.168.123.100:0/1622020882' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:45:45.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:45 vm00 bash[20726]: audit 2026-03-10T14:45:45.705121+0000 mon.a (mon.0) 50 : audit [DBG] from='mgr.14120 192.168.123.100:0/1622020882' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:45:45.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:45 vm00 bash[20726]: audit 2026-03-10T14:45:45.708159+0000 mon.a (mon.0) 51 : audit [INF] from='mgr.14120 192.168.123.100:0/1622020882' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T14:45:45.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:45 vm00 bash[20726]: audit 2026-03-10T14:45:45.708159+0000 mon.a (mon.0) 51 : audit [INF] from='mgr.14120 192.168.123.100:0/1622020882' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T14:45:46.723 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout { 2026-03-10T14:45:46.723 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "mgrmap_epoch": 7, 2026-03-10T14:45:46.723 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "initialized": true 2026-03-10T14:45:46.723 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout } 2026-03-10T14:45:46.723 INFO:teuthology.orchestra.run.vm00.stdout:mgr epoch 5 is available 2026-03-10T14:45:46.723 INFO:teuthology.orchestra.run.vm00.stdout:Setting orchestrator backend to cephadm... 2026-03-10T14:45:46.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:46 vm00 bash[20726]: cephadm 2026-03-10T14:45:45.683253+0000 mgr.y (mgr.14120) 1 : cephadm [INF] Found migration_current of "None". Setting to last migration. 2026-03-10T14:45:46.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:46 vm00 bash[20726]: cephadm 2026-03-10T14:45:45.683253+0000 mgr.y (mgr.14120) 1 : cephadm [INF] Found migration_current of "None". Setting to last migration. 2026-03-10T14:45:46.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:46 vm00 bash[20726]: audit 2026-03-10T14:45:45.715516+0000 mon.a (mon.0) 52 : audit [INF] from='mgr.14120 192.168.123.100:0/1622020882' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T14:45:46.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:46 vm00 bash[20726]: audit 2026-03-10T14:45:45.715516+0000 mon.a (mon.0) 52 : audit [INF] from='mgr.14120 192.168.123.100:0/1622020882' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T14:45:46.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:46 vm00 bash[20726]: audit 2026-03-10T14:45:46.084055+0000 mon.a (mon.0) 53 : audit [INF] from='mgr.14120 192.168.123.100:0/1622020882' entity='mgr.y' 2026-03-10T14:45:46.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:46 vm00 bash[20726]: audit 2026-03-10T14:45:46.084055+0000 mon.a (mon.0) 53 : audit [INF] from='mgr.14120 192.168.123.100:0/1622020882' entity='mgr.y' 2026-03-10T14:45:46.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:46 vm00 bash[20726]: audit 2026-03-10T14:45:46.087444+0000 mon.a (mon.0) 54 : 
audit [INF] from='mgr.14120 192.168.123.100:0/1622020882' entity='mgr.y' 2026-03-10T14:45:46.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:46 vm00 bash[20726]: audit 2026-03-10T14:45:46.087444+0000 mon.a (mon.0) 54 : audit [INF] from='mgr.14120 192.168.123.100:0/1622020882' entity='mgr.y' 2026-03-10T14:45:46.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:46 vm00 bash[20726]: cluster 2026-03-10T14:45:46.667308+0000 mon.a (mon.0) 55 : cluster [DBG] mgrmap e7: y(active, since 1.00826s) 2026-03-10T14:45:46.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:46 vm00 bash[20726]: cluster 2026-03-10T14:45:46.667308+0000 mon.a (mon.0) 55 : cluster [DBG] mgrmap e7: y(active, since 1.00826s) 2026-03-10T14:45:47.316 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout value unchanged 2026-03-10T14:45:47.316 INFO:teuthology.orchestra.run.vm00.stdout:Generating ssh key... 2026-03-10T14:45:47.893 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:47 vm00 bash[21005]: Generating public/private ed25519 key pair. 
2026-03-10T14:45:47.893 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:47 vm00 bash[21005]: Your identification has been saved in /tmp/tmpzz_jj7q1/key 2026-03-10T14:45:47.893 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:47 vm00 bash[21005]: Your public key has been saved in /tmp/tmpzz_jj7q1/key.pub 2026-03-10T14:45:47.893 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:47 vm00 bash[21005]: The key fingerprint is: 2026-03-10T14:45:47.893 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:47 vm00 bash[21005]: SHA256:Uwj/6mmdj45bprUFz8CDBWQlBNB/rfxH73JioWeZtRM ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7 2026-03-10T14:45:47.893 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:47 vm00 bash[21005]: The key's randomart image is: 2026-03-10T14:45:47.893 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:47 vm00 bash[21005]: +--[ED25519 256]--+ 2026-03-10T14:45:47.893 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:47 vm00 bash[21005]: | .oo+*.. | 2026-03-10T14:45:47.893 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:47 vm00 bash[21005]: | .+ + | 2026-03-10T14:45:47.893 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:47 vm00 bash[21005]: | .o o. | 2026-03-10T14:45:47.893 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:47 vm00 bash[21005]: | .*. . | 2026-03-10T14:45:47.893 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:47 vm00 bash[21005]: | So*. 
| 2026-03-10T14:45:47.893 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:47 vm00 bash[21005]: | oo* oE.| 2026-03-10T14:45:47.893 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:47 vm00 bash[21005]: | ..+o+o *o| 2026-03-10T14:45:47.893 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:47 vm00 bash[21005]: | ..Bo+o Xoo| 2026-03-10T14:45:47.894 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:47 vm00 bash[21005]: | .*o+..= =o| 2026-03-10T14:45:47.894 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:47 vm00 bash[21005]: +----[SHA256]-----+ 2026-03-10T14:45:48.187 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:47 vm00 bash[20726]: cephadm 2026-03-10T14:45:46.584353+0000 mgr.y (mgr.14120) 2 : cephadm [INF] [10/Mar/2026:14:45:46] ENGINE Bus STARTING 2026-03-10T14:45:48.187 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:47 vm00 bash[20726]: cephadm 2026-03-10T14:45:46.584353+0000 mgr.y (mgr.14120) 2 : cephadm [INF] [10/Mar/2026:14:45:46] ENGINE Bus STARTING 2026-03-10T14:45:48.187 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:47 vm00 bash[20726]: audit 2026-03-10T14:45:46.670321+0000 mgr.y (mgr.14120) 3 : audit [DBG] from='client.14124 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-10T14:45:48.188 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:47 vm00 bash[20726]: audit 2026-03-10T14:45:46.670321+0000 mgr.y (mgr.14120) 3 : audit [DBG] from='client.14124 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-10T14:45:48.188 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:47 vm00 bash[20726]: audit 2026-03-10T14:45:46.674495+0000 mgr.y (mgr.14120) 4 : audit [DBG] from='client.14124 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-10T14:45:48.188 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:47 vm00 bash[20726]: audit 2026-03-10T14:45:46.674495+0000 mgr.y (mgr.14120) 4 : audit [DBG] from='client.14124 -' entity='client.admin' 
cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-10T14:45:48.188 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:47 vm00 bash[20726]: cephadm 2026-03-10T14:45:46.685884+0000 mgr.y (mgr.14120) 5 : cephadm [INF] [10/Mar/2026:14:45:46] ENGINE Serving on http://192.168.123.100:8765 2026-03-10T14:45:48.188 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:47 vm00 bash[20726]: cephadm 2026-03-10T14:45:46.685884+0000 mgr.y (mgr.14120) 5 : cephadm [INF] [10/Mar/2026:14:45:46] ENGINE Serving on http://192.168.123.100:8765 2026-03-10T14:45:48.188 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:47 vm00 bash[20726]: cephadm 2026-03-10T14:45:46.797672+0000 mgr.y (mgr.14120) 6 : cephadm [INF] [10/Mar/2026:14:45:46] ENGINE Serving on https://192.168.123.100:7150 2026-03-10T14:45:48.188 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:47 vm00 bash[20726]: cephadm 2026-03-10T14:45:46.797672+0000 mgr.y (mgr.14120) 6 : cephadm [INF] [10/Mar/2026:14:45:46] ENGINE Serving on https://192.168.123.100:7150 2026-03-10T14:45:48.188 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:47 vm00 bash[20726]: cephadm 2026-03-10T14:45:46.798051+0000 mgr.y (mgr.14120) 7 : cephadm [INF] [10/Mar/2026:14:45:46] ENGINE Client ('192.168.123.100', 50206) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T14:45:48.188 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:47 vm00 bash[20726]: cephadm 2026-03-10T14:45:46.798051+0000 mgr.y (mgr.14120) 7 : cephadm [INF] [10/Mar/2026:14:45:46] ENGINE Client ('192.168.123.100', 50206) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T14:45:48.188 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:47 vm00 bash[20726]: cephadm 2026-03-10T14:45:46.798093+0000 mgr.y (mgr.14120) 8 : cephadm [INF] [10/Mar/2026:14:45:46] ENGINE Bus STARTED 2026-03-10T14:45:48.188 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:47 vm00 bash[20726]: cephadm 2026-03-10T14:45:46.798093+0000 mgr.y (mgr.14120) 8 : cephadm [INF] [10/Mar/2026:14:45:46] ENGINE Bus STARTED 2026-03-10T14:45:48.188 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:47 vm00 bash[20726]: audit 2026-03-10T14:45:46.798475+0000 mon.a (mon.0) 56 : audit [DBG] from='mgr.14120 192.168.123.100:0/1622020882' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:45:48.188 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:47 vm00 bash[20726]: audit 2026-03-10T14:45:46.798475+0000 mon.a (mon.0) 56 : audit [DBG] from='mgr.14120 192.168.123.100:0/1622020882' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:45:48.188 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:47 vm00 bash[20726]: audit 2026-03-10T14:45:46.969499+0000 mgr.y (mgr.14120) 9 : audit [DBG] from='client.14132 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:45:48.188 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:47 vm00 bash[20726]: audit 2026-03-10T14:45:46.969499+0000 mgr.y (mgr.14120) 9 : audit [DBG] from='client.14132 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:45:48.188 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:47 vm00 bash[20726]: audit 2026-03-10T14:45:46.973513+0000 mon.a (mon.0) 57 : audit [INF] from='mgr.14120 192.168.123.100:0/1622020882' entity='mgr.y' 2026-03-10T14:45:48.188 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:47 vm00 bash[20726]: audit 2026-03-10T14:45:46.973513+0000 mon.a (mon.0) 57 : audit [INF] from='mgr.14120 192.168.123.100:0/1622020882' entity='mgr.y' 2026-03-10T14:45:48.188 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:47 vm00 bash[20726]: audit 2026-03-10T14:45:46.979851+0000 mon.a (mon.0) 58 : 
audit [DBG] from='mgr.14120 192.168.123.100:0/1622020882' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:45:48.188 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:47 vm00 bash[20726]: audit 2026-03-10T14:45:46.979851+0000 mon.a (mon.0) 58 : audit [DBG] from='mgr.14120 192.168.123.100:0/1622020882' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:45:48.188 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:47 vm00 bash[20726]: audit 2026-03-10T14:45:47.592226+0000 mon.a (mon.0) 59 : audit [INF] from='mgr.14120 192.168.123.100:0/1622020882' entity='mgr.y' 2026-03-10T14:45:48.188 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:47 vm00 bash[20726]: audit 2026-03-10T14:45:47.592226+0000 mon.a (mon.0) 59 : audit [INF] from='mgr.14120 192.168.123.100:0/1622020882' entity='mgr.y' 2026-03-10T14:45:48.188 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:47 vm00 bash[20726]: audit 2026-03-10T14:45:47.594213+0000 mon.a (mon.0) 60 : audit [INF] from='mgr.14120 192.168.123.100:0/1622020882' entity='mgr.y' 2026-03-10T14:45:48.188 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:47 vm00 bash[20726]: audit 2026-03-10T14:45:47.594213+0000 mon.a (mon.0) 60 : audit [INF] from='mgr.14120 192.168.123.100:0/1622020882' entity='mgr.y' 2026-03-10T14:45:48.218 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEo837u2M0zBeQnAZnO0vd6ldtZNI+ckiCye1sxcu/i+ ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7 2026-03-10T14:45:48.218 INFO:teuthology.orchestra.run.vm00.stdout:Wrote public SSH key to /home/ubuntu/cephtest/ceph.pub 2026-03-10T14:45:48.218 INFO:teuthology.orchestra.run.vm00.stdout:Adding key to root@localhost authorized_keys... 2026-03-10T14:45:48.218 INFO:teuthology.orchestra.run.vm00.stdout:Adding host vm00... 
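The audit entries around "Adding host vm00..." record each CLI step as a JSON command object with a "prefix" plus arguments, dispatched to the mon or mgr. A sketch rebuilding the `orch host add` payload exactly as the audit log shows it (nothing here beyond what the audit entries contain; "target" is the routing field the CLI adds for mgr-bound commands):

```python
import json

# The "orch host add" command for vm00 as recorded in the audit log:
# a list containing one command dict, serialized to JSON for dispatch.
cmd = {
    "prefix": "orch host add",
    "hostname": "vm00",
    "addr": "192.168.123.100",
    "target": ["mon-mgr", ""],
}
payload = json.dumps([cmd])
print(payload)
```

This is only a reconstruction of the wire-format payload for illustration; the actual dispatch path goes through librados/the mon command interface, not plain JSON over a socket.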
2026-03-10T14:45:49.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:49 vm00 bash[20726]: audit 2026-03-10T14:45:47.267742+0000 mgr.y (mgr.14120) 10 : audit [DBG] from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:45:49.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:49 vm00 bash[20726]: audit 2026-03-10T14:45:47.267742+0000 mgr.y (mgr.14120) 10 : audit [DBG] from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:45:49.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:49 vm00 bash[20726]: audit 2026-03-10T14:45:47.566173+0000 mgr.y (mgr.14120) 11 : audit [DBG] from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:45:49.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:49 vm00 bash[20726]: audit 2026-03-10T14:45:47.566173+0000 mgr.y (mgr.14120) 11 : audit [DBG] from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:45:49.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:49 vm00 bash[20726]: cephadm 2026-03-10T14:45:47.566445+0000 mgr.y (mgr.14120) 12 : cephadm [INF] Generating ssh key... 2026-03-10T14:45:49.289 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:49 vm00 bash[20726]: cephadm 2026-03-10T14:45:47.566445+0000 mgr.y (mgr.14120) 12 : cephadm [INF] Generating ssh key... 
2026-03-10T14:45:49.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:49 vm00 bash[20726]: cluster 2026-03-10T14:45:48.600154+0000 mon.a (mon.0) 61 : cluster [DBG] mgrmap e8: y(active, since 2s) 2026-03-10T14:45:49.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:49 vm00 bash[20726]: cluster 2026-03-10T14:45:48.600154+0000 mon.a (mon.0) 61 : cluster [DBG] mgrmap e8: y(active, since 2s) 2026-03-10T14:45:50.470 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:50 vm00 bash[20726]: audit 2026-03-10T14:45:48.173556+0000 mgr.y (mgr.14120) 13 : audit [DBG] from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:45:50.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:50 vm00 bash[20726]: audit 2026-03-10T14:45:48.173556+0000 mgr.y (mgr.14120) 13 : audit [DBG] from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:45:50.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:50 vm00 bash[20726]: audit 2026-03-10T14:45:48.464707+0000 mgr.y (mgr.14120) 14 : audit [DBG] from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm00", "addr": "192.168.123.100", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:45:50.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:50 vm00 bash[20726]: audit 2026-03-10T14:45:48.464707+0000 mgr.y (mgr.14120) 14 : audit [DBG] from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm00", "addr": "192.168.123.100", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:45:50.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:50 vm00 bash[20726]: cephadm 2026-03-10T14:45:49.176194+0000 mgr.y (mgr.14120) 15 : cephadm [INF] Deploying cephadm binary to vm00 2026-03-10T14:45:50.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:50 vm00 bash[20726]: cephadm 2026-03-10T14:45:49.176194+0000 mgr.y 
(mgr.14120) 15 : cephadm [INF] Deploying cephadm binary to vm00 2026-03-10T14:45:50.601 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout Added host 'vm00' with addr '192.168.123.100' 2026-03-10T14:45:50.601 INFO:teuthology.orchestra.run.vm00.stdout:Deploying unmanaged mon service... 2026-03-10T14:45:50.972 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout Scheduled mon update... 2026-03-10T14:45:50.972 INFO:teuthology.orchestra.run.vm00.stdout:Deploying unmanaged mgr service... 2026-03-10T14:45:51.311 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout Scheduled mgr update... 2026-03-10T14:45:51.957 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:51 vm00 bash[20726]: audit 2026-03-10T14:45:50.539521+0000 mon.a (mon.0) 62 : audit [INF] from='mgr.14120 192.168.123.100:0/1622020882' entity='mgr.y' 2026-03-10T14:45:51.958 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:51 vm00 bash[20726]: audit 2026-03-10T14:45:50.539521+0000 mon.a (mon.0) 62 : audit [INF] from='mgr.14120 192.168.123.100:0/1622020882' entity='mgr.y' 2026-03-10T14:45:51.958 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:51 vm00 bash[20726]: cephadm 2026-03-10T14:45:50.539977+0000 mgr.y (mgr.14120) 16 : cephadm [INF] Added host vm00 2026-03-10T14:45:51.958 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:51 vm00 bash[20726]: cephadm 2026-03-10T14:45:50.539977+0000 mgr.y (mgr.14120) 16 : cephadm [INF] Added host vm00 2026-03-10T14:45:51.958 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:51 vm00 bash[20726]: audit 2026-03-10T14:45:50.542846+0000 mon.a (mon.0) 63 : audit [DBG] from='mgr.14120 192.168.123.100:0/1622020882' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:45:51.958 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:51 vm00 bash[20726]: audit 2026-03-10T14:45:50.542846+0000 mon.a (mon.0) 63 : audit [DBG] from='mgr.14120 192.168.123.100:0/1622020882' entity='mgr.y' cmd=[{"prefix": "config 
dump", "format": "json"}]: dispatch 2026-03-10T14:45:51.958 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:51 vm00 bash[20726]: audit 2026-03-10T14:45:50.921295+0000 mgr.y (mgr.14120) 17 : audit [DBG] from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:45:51.958 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:51 vm00 bash[20726]: audit 2026-03-10T14:45:50.921295+0000 mgr.y (mgr.14120) 17 : audit [DBG] from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:45:51.958 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:51 vm00 bash[20726]: cephadm 2026-03-10T14:45:50.922302+0000 mgr.y (mgr.14120) 18 : cephadm [INF] Saving service mon spec with placement count:5 2026-03-10T14:45:51.958 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:51 vm00 bash[20726]: cephadm 2026-03-10T14:45:50.922302+0000 mgr.y (mgr.14120) 18 : cephadm [INF] Saving service mon spec with placement count:5 2026-03-10T14:45:51.958 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:51 vm00 bash[20726]: audit 2026-03-10T14:45:50.925765+0000 mon.a (mon.0) 64 : audit [INF] from='mgr.14120 192.168.123.100:0/1622020882' entity='mgr.y' 2026-03-10T14:45:51.958 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:51 vm00 bash[20726]: audit 2026-03-10T14:45:50.925765+0000 mon.a (mon.0) 64 : audit [INF] from='mgr.14120 192.168.123.100:0/1622020882' entity='mgr.y' 2026-03-10T14:45:51.958 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:51 vm00 bash[20726]: audit 2026-03-10T14:45:51.243913+0000 mon.a (mon.0) 65 : audit [INF] from='mgr.14120 192.168.123.100:0/1622020882' entity='mgr.y' 2026-03-10T14:45:51.958 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:51 vm00 bash[20726]: audit 2026-03-10T14:45:51.243913+0000 mon.a (mon.0) 65 : audit [INF] from='mgr.14120 
192.168.123.100:0/1622020882' entity='mgr.y' 2026-03-10T14:45:51.993 INFO:teuthology.orchestra.run.vm00.stdout:Enabling the dashboard module... 2026-03-10T14:45:52.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:52 vm00 bash[20726]: audit 2026-03-10T14:45:51.238304+0000 mgr.y (mgr.14120) 19 : audit [DBG] from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:45:52.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:52 vm00 bash[20726]: audit 2026-03-10T14:45:51.238304+0000 mgr.y (mgr.14120) 19 : audit [DBG] from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:45:52.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:52 vm00 bash[20726]: cephadm 2026-03-10T14:45:51.239507+0000 mgr.y (mgr.14120) 20 : cephadm [INF] Saving service mgr spec with placement count:2 2026-03-10T14:45:52.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:52 vm00 bash[20726]: cephadm 2026-03-10T14:45:51.239507+0000 mgr.y (mgr.14120) 20 : cephadm [INF] Saving service mgr spec with placement count:2 2026-03-10T14:45:52.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:52 vm00 bash[20726]: audit 2026-03-10T14:45:51.576451+0000 mon.a (mon.0) 66 : audit [INF] from='client.? 192.168.123.100:0/2492060380' entity='client.admin' 2026-03-10T14:45:52.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:52 vm00 bash[20726]: audit 2026-03-10T14:45:51.576451+0000 mon.a (mon.0) 66 : audit [INF] from='client.? 192.168.123.100:0/2492060380' entity='client.admin' 2026-03-10T14:45:52.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:52 vm00 bash[20726]: audit 2026-03-10T14:45:51.927033+0000 mon.a (mon.0) 67 : audit [INF] from='client.? 
192.168.123.100:0/1009410406' entity='client.admin' 2026-03-10T14:45:52.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:52 vm00 bash[20726]: audit 2026-03-10T14:45:51.927033+0000 mon.a (mon.0) 67 : audit [INF] from='client.? 192.168.123.100:0/1009410406' entity='client.admin' 2026-03-10T14:45:52.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:52 vm00 bash[20726]: audit 2026-03-10T14:45:52.144317+0000 mon.a (mon.0) 68 : audit [INF] from='mgr.14120 192.168.123.100:0/1622020882' entity='mgr.y' 2026-03-10T14:45:52.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:52 vm00 bash[20726]: audit 2026-03-10T14:45:52.144317+0000 mon.a (mon.0) 68 : audit [INF] from='mgr.14120 192.168.123.100:0/1622020882' entity='mgr.y' 2026-03-10T14:45:52.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:52 vm00 bash[20726]: audit 2026-03-10T14:45:52.289247+0000 mon.a (mon.0) 69 : audit [INF] from='client.? 192.168.123.100:0/768964531' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch 2026-03-10T14:45:52.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:52 vm00 bash[20726]: audit 2026-03-10T14:45:52.289247+0000 mon.a (mon.0) 69 : audit [INF] from='client.? 
192.168.123.100:0/768964531' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch 2026-03-10T14:45:52.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:52 vm00 bash[20726]: audit 2026-03-10T14:45:52.481337+0000 mon.a (mon.0) 70 : audit [INF] from='mgr.14120 192.168.123.100:0/1622020882' entity='mgr.y' 2026-03-10T14:45:52.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:52 vm00 bash[20726]: audit 2026-03-10T14:45:52.481337+0000 mon.a (mon.0) 70 : audit [INF] from='mgr.14120 192.168.123.100:0/1622020882' entity='mgr.y' 2026-03-10T14:45:53.470 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:53 vm00 bash[21005]: ignoring --setuser ceph since I am not root 2026-03-10T14:45:53.470 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:53 vm00 bash[21005]: ignoring --setgroup ceph since I am not root 2026-03-10T14:45:53.470 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:53 vm00 bash[21005]: debug 2026-03-10T14:45:53.275+0000 7f060d1e3140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-10T14:45:53.470 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:53 vm00 bash[21005]: debug 2026-03-10T14:45:53.323+0000 7f060d1e3140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-10T14:45:53.624 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout { 2026-03-10T14:45:53.624 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 9, 2026-03-10T14:45:53.624 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "available": true, 2026-03-10T14:45:53.624 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "active_name": "y", 2026-03-10T14:45:53.624 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_standby": 0 2026-03-10T14:45:53.624 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout } 2026-03-10T14:45:53.624 INFO:teuthology.orchestra.run.vm00.stdout:Waiting for the mgr to restart... 
2026-03-10T14:45:53.624 INFO:teuthology.orchestra.run.vm00.stdout:Waiting for mgr epoch 9... 2026-03-10T14:45:53.856 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:53 vm00 bash[21005]: debug 2026-03-10T14:45:53.471+0000 7f060d1e3140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-10T14:45:54.150 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:53 vm00 bash[21005]: debug 2026-03-10T14:45:53.847+0000 7f060d1e3140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-10T14:45:54.441 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:54 vm00 bash[21005]: debug 2026-03-10T14:45:54.343+0000 7f060d1e3140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-10T14:45:54.441 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:54 vm00 bash[20726]: audit 2026-03-10T14:45:53.146182+0000 mon.a (mon.0) 71 : audit [INF] from='client.? 192.168.123.100:0/768964531' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished 2026-03-10T14:45:54.441 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:54 vm00 bash[20726]: audit 2026-03-10T14:45:53.146182+0000 mon.a (mon.0) 71 : audit [INF] from='client.? 192.168.123.100:0/768964531' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished 2026-03-10T14:45:54.441 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:54 vm00 bash[20726]: cluster 2026-03-10T14:45:53.149348+0000 mon.a (mon.0) 72 : cluster [DBG] mgrmap e9: y(active, since 7s) 2026-03-10T14:45:54.441 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:54 vm00 bash[20726]: cluster 2026-03-10T14:45:53.149348+0000 mon.a (mon.0) 72 : cluster [DBG] mgrmap e9: y(active, since 7s) 2026-03-10T14:45:54.441 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:54 vm00 bash[20726]: audit 2026-03-10T14:45:53.557570+0000 mon.a (mon.0) 73 : audit [DBG] from='client.? 
192.168.123.100:0/1828742981' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-10T14:45:54.441 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:54 vm00 bash[20726]: audit 2026-03-10T14:45:53.557570+0000 mon.a (mon.0) 73 : audit [DBG] from='client.? 192.168.123.100:0/1828742981' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-10T14:45:54.720 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:54 vm00 bash[21005]: debug 2026-03-10T14:45:54.431+0000 7f060d1e3140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-10T14:45:54.721 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:54 vm00 bash[21005]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-10T14:45:54.721 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:54 vm00 bash[21005]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
2026-03-10T14:45:54.721 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:54 vm00 bash[21005]: from numpy import show_config as show_numpy_config 2026-03-10T14:45:54.721 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:54 vm00 bash[21005]: debug 2026-03-10T14:45:54.571+0000 7f060d1e3140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-10T14:45:55.220 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:54 vm00 bash[21005]: debug 2026-03-10T14:45:54.727+0000 7f060d1e3140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-10T14:45:55.220 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:54 vm00 bash[21005]: debug 2026-03-10T14:45:54.767+0000 7f060d1e3140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-10T14:45:55.220 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:54 vm00 bash[21005]: debug 2026-03-10T14:45:54.811+0000 7f060d1e3140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-10T14:45:55.220 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:54 vm00 bash[21005]: debug 2026-03-10T14:45:54.859+0000 7f060d1e3140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-10T14:45:55.221 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:54 vm00 bash[21005]: debug 2026-03-10T14:45:54.915+0000 7f060d1e3140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-10T14:45:55.670 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:55 vm00 bash[21005]: debug 2026-03-10T14:45:55.387+0000 7f060d1e3140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-10T14:45:55.670 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:55 vm00 bash[21005]: debug 2026-03-10T14:45:55.423+0000 7f060d1e3140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-10T14:45:55.670 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:55 vm00 bash[21005]: debug 2026-03-10T14:45:55.463+0000 7f060d1e3140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 
2026-03-10T14:45:55.670 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:55 vm00 bash[21005]: debug 2026-03-10T14:45:55.615+0000 7f060d1e3140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-10T14:45:55.970 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:55 vm00 bash[21005]: debug 2026-03-10T14:45:55.663+0000 7f060d1e3140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-10T14:45:55.970 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:55 vm00 bash[21005]: debug 2026-03-10T14:45:55.711+0000 7f060d1e3140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-10T14:45:55.970 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:55 vm00 bash[21005]: debug 2026-03-10T14:45:55.827+0000 7f060d1e3140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-10T14:45:56.267 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:55 vm00 bash[21005]: debug 2026-03-10T14:45:55.987+0000 7f060d1e3140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-10T14:45:56.268 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:56 vm00 bash[21005]: debug 2026-03-10T14:45:56.159+0000 7f060d1e3140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-10T14:45:56.268 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:56 vm00 bash[21005]: debug 2026-03-10T14:45:56.211+0000 7f060d1e3140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-10T14:45:56.679 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:56 vm00 bash[21005]: debug 2026-03-10T14:45:56.259+0000 7f060d1e3140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-10T14:45:56.679 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:56 vm00 bash[21005]: debug 2026-03-10T14:45:56.419+0000 7f060d1e3140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-10T14:45:56.970 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:45:56 vm00 bash[21005]: debug 2026-03-10T14:45:56.671+0000 7f060d1e3140 -1 
mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-10T14:45:56.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:56 vm00 bash[20726]: cluster 2026-03-10T14:45:56.679231+0000 mon.a (mon.0) 74 : cluster [INF] Active manager daemon y restarted 2026-03-10T14:45:56.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:56 vm00 bash[20726]: cluster 2026-03-10T14:45:56.679231+0000 mon.a (mon.0) 74 : cluster [INF] Active manager daemon y restarted 2026-03-10T14:45:56.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:56 vm00 bash[20726]: cluster 2026-03-10T14:45:56.679514+0000 mon.a (mon.0) 75 : cluster [INF] Activating manager daemon y 2026-03-10T14:45:56.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:56 vm00 bash[20726]: cluster 2026-03-10T14:45:56.679514+0000 mon.a (mon.0) 75 : cluster [INF] Activating manager daemon y 2026-03-10T14:45:56.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:56 vm00 bash[20726]: cluster 2026-03-10T14:45:56.684623+0000 mon.a (mon.0) 76 : cluster [DBG] osdmap e3: 0 total, 0 up, 0 in 2026-03-10T14:45:56.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:56 vm00 bash[20726]: cluster 2026-03-10T14:45:56.684623+0000 mon.a (mon.0) 76 : cluster [DBG] osdmap e3: 0 total, 0 up, 0 in 2026-03-10T14:45:56.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:56 vm00 bash[20726]: cluster 2026-03-10T14:45:56.684751+0000 mon.a (mon.0) 77 : cluster [DBG] mgrmap e10: y(active, starting, since 0.00536914s) 2026-03-10T14:45:56.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:56 vm00 bash[20726]: cluster 2026-03-10T14:45:56.684751+0000 mon.a (mon.0) 77 : cluster [DBG] mgrmap e10: y(active, starting, since 0.00536914s) 2026-03-10T14:45:56.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:56 vm00 bash[20726]: audit 2026-03-10T14:45:56.686947+0000 mon.a (mon.0) 78 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 
2026-03-10T14:45:56.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:56 vm00 bash[20726]: audit 2026-03-10T14:45:56.686947+0000 mon.a (mon.0) 78 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T14:45:56.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:56 vm00 bash[20726]: audit 2026-03-10T14:45:56.687026+0000 mon.a (mon.0) 79 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-10T14:45:56.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:56 vm00 bash[20726]: audit 2026-03-10T14:45:56.687026+0000 mon.a (mon.0) 79 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-10T14:45:56.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:56 vm00 bash[20726]: audit 2026-03-10T14:45:56.687738+0000 mon.a (mon.0) 80 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T14:45:56.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:56 vm00 bash[20726]: audit 2026-03-10T14:45:56.687738+0000 mon.a (mon.0) 80 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T14:45:56.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:56 vm00 bash[20726]: audit 2026-03-10T14:45:56.687798+0000 mon.a (mon.0) 81 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T14:45:56.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:56 vm00 bash[20726]: audit 2026-03-10T14:45:56.687798+0000 mon.a (mon.0) 81 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T14:45:56.971 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:56 vm00 bash[20726]: audit 2026-03-10T14:45:56.687947+0000 mon.a (mon.0) 82 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T14:45:56.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:56 vm00 bash[20726]: audit 2026-03-10T14:45:56.687947+0000 mon.a (mon.0) 82 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T14:45:56.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:56 vm00 bash[20726]: cluster 2026-03-10T14:45:56.694290+0000 mon.a (mon.0) 83 : cluster [INF] Manager daemon y is now available 2026-03-10T14:45:56.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:56 vm00 bash[20726]: cluster 2026-03-10T14:45:56.694290+0000 mon.a (mon.0) 83 : cluster [INF] Manager daemon y is now available 2026-03-10T14:45:56.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:56 vm00 bash[20726]: audit 2026-03-10T14:45:56.713061+0000 mon.a (mon.0) 84 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:45:56.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:56 vm00 bash[20726]: audit 2026-03-10T14:45:56.713061+0000 mon.a (mon.0) 84 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:45:57.906 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout { 2026-03-10T14:45:57.906 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "mgrmap_epoch": 11, 2026-03-10T14:45:57.906 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "initialized": true 2026-03-10T14:45:57.906 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout } 2026-03-10T14:45:57.906 INFO:teuthology.orchestra.run.vm00.stdout:mgr epoch 9 is available 
2026-03-10T14:45:57.906 INFO:teuthology.orchestra.run.vm00.stdout:Generating a dashboard self-signed certificate... 2026-03-10T14:45:58.220 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:57 vm00 bash[20726]: audit 2026-03-10T14:45:56.739740+0000 mon.a (mon.0) 85 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T14:45:58.220 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:57 vm00 bash[20726]: audit 2026-03-10T14:45:56.739740+0000 mon.a (mon.0) 85 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T14:45:58.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:57 vm00 bash[20726]: audit 2026-03-10T14:45:56.740658+0000 mon.a (mon.0) 86 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T14:45:58.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:57 vm00 bash[20726]: audit 2026-03-10T14:45:56.740658+0000 mon.a (mon.0) 86 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T14:45:58.270 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout Self-signed certificate created 2026-03-10T14:45:58.270 INFO:teuthology.orchestra.run.vm00.stdout:Creating initial admin user... 
2026-03-10T14:45:58.942 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout {"username": "admin", "password": "$2b$12$xB/VBeR2ZSCR9wfahOpiZO755nvb30MjHf.CIuWi97m5Sms5O8xn.", "roles": ["administrator"], "name": null, "email": null, "lastUpdate": 1773153958, "enabled": true, "pwdExpirationDate": null, "pwdUpdateRequired": true} 2026-03-10T14:45:58.942 INFO:teuthology.orchestra.run.vm00.stdout:Fetching dashboard port number... 2026-03-10T14:45:58.956 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:58 vm00 bash[20726]: cluster 2026-03-10T14:45:57.803275+0000 mon.a (mon.0) 87 : cluster [DBG] mgrmap e11: y(active, since 1.12389s) 2026-03-10T14:45:58.957 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:58 vm00 bash[20726]: cluster 2026-03-10T14:45:57.803275+0000 mon.a (mon.0) 87 : cluster [DBG] mgrmap e11: y(active, since 1.12389s) 2026-03-10T14:45:58.957 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:58 vm00 bash[20726]: cephadm 2026-03-10T14:45:57.845146+0000 mgr.y (mgr.14152) 3 : cephadm [INF] [10/Mar/2026:14:45:57] ENGINE Bus STARTING 2026-03-10T14:45:58.957 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:58 vm00 bash[20726]: cephadm 2026-03-10T14:45:57.845146+0000 mgr.y (mgr.14152) 3 : cephadm [INF] [10/Mar/2026:14:45:57] ENGINE Bus STARTING 2026-03-10T14:45:58.957 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:58 vm00 bash[20726]: cephadm 2026-03-10T14:45:57.946554+0000 mgr.y (mgr.14152) 4 : cephadm [INF] [10/Mar/2026:14:45:57] ENGINE Serving on http://192.168.123.100:8765 2026-03-10T14:45:58.957 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:58 vm00 bash[20726]: cephadm 2026-03-10T14:45:57.946554+0000 mgr.y (mgr.14152) 4 : cephadm [INF] [10/Mar/2026:14:45:57] ENGINE Serving on http://192.168.123.100:8765 2026-03-10T14:45:58.957 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:58 vm00 bash[20726]: cephadm 2026-03-10T14:45:58.057378+0000 mgr.y (mgr.14152) 5 : cephadm [INF] [10/Mar/2026:14:45:58] ENGINE Serving on 
https://192.168.123.100:7150
2026-03-10T14:45:58.957 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:58 vm00 bash[20726]: cephadm 2026-03-10T14:45:58.057378+0000 mgr.y (mgr.14152) 5 : cephadm [INF] [10/Mar/2026:14:45:58] ENGINE Serving on https://192.168.123.100:7150
2026-03-10T14:45:58.957 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:58 vm00 bash[20726]: cephadm 2026-03-10T14:45:58.057471+0000 mgr.y (mgr.14152) 6 : cephadm [INF] [10/Mar/2026:14:45:58] ENGINE Bus STARTED
2026-03-10T14:45:58.957 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:58 vm00 bash[20726]: cephadm 2026-03-10T14:45:58.057786+0000 mgr.y (mgr.14152) 7 : cephadm [INF] [10/Mar/2026:14:45:58] ENGINE Client ('192.168.123.100', 34650) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
2026-03-10T14:45:58.957 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:58 vm00 bash[20726]: audit 2026-03-10T14:45:58.162182+0000 mgr.y (mgr.14152) 8 : audit [DBG] from='client.14164 -' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T14:45:58.957 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:58 vm00 bash[20726]: audit 2026-03-10T14:45:58.221105+0000 mon.a (mon.0) 88 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:45:58.957 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:58 vm00 bash[20726]: audit 2026-03-10T14:45:58.224642+0000 mon.a (mon.0) 89 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:45:58.957 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:58 vm00 bash[20726]: audit 2026-03-10T14:45:58.670468+0000 mon.a (mon.0) 90 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:45:59.242 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 8443
2026-03-10T14:45:59.242 INFO:teuthology.orchestra.run.vm00.stdout:firewalld does not appear to be present
2026-03-10T14:45:59.242 INFO:teuthology.orchestra.run.vm00.stdout:Not possible to open ports <[8443]>. firewalld.service is not available
2026-03-10T14:45:59.243 INFO:teuthology.orchestra.run.vm00.stdout:Ceph Dashboard is now available at:
2026-03-10T14:45:59.243 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T14:45:59.243 INFO:teuthology.orchestra.run.vm00.stdout: URL: https://vm00.local:8443/
2026-03-10T14:45:59.243 INFO:teuthology.orchestra.run.vm00.stdout: User: admin
2026-03-10T14:45:59.243 INFO:teuthology.orchestra.run.vm00.stdout: Password: yxs7tc2yhq
2026-03-10T14:45:59.243 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T14:45:59.243 INFO:teuthology.orchestra.run.vm00.stdout:Saving cluster configuration to /var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/config directory
2026-03-10T14:45:59.582 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr set mgr/dashboard/cluster/status
2026-03-10T14:45:59.582 INFO:teuthology.orchestra.run.vm00.stdout:You can access the Ceph CLI as following in case of multi-cluster or non-default config:
2026-03-10T14:45:59.582 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T14:45:59.582 INFO:teuthology.orchestra.run.vm00.stdout: sudo /home/ubuntu/cephtest/cephadm shell --fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring
2026-03-10T14:45:59.582 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T14:45:59.582 INFO:teuthology.orchestra.run.vm00.stdout:Or, if you are only running a single cluster on this host:
2026-03-10T14:45:59.582 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T14:45:59.582 INFO:teuthology.orchestra.run.vm00.stdout: sudo /home/ubuntu/cephtest/cephadm shell
2026-03-10T14:45:59.582 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T14:45:59.582 INFO:teuthology.orchestra.run.vm00.stdout:Please consider enabling telemetry to help improve Ceph:
2026-03-10T14:45:59.582 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T14:45:59.582 INFO:teuthology.orchestra.run.vm00.stdout: ceph telemetry on
2026-03-10T14:45:59.582 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T14:45:59.582 INFO:teuthology.orchestra.run.vm00.stdout:For more information see:
2026-03-10T14:45:59.582 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T14:45:59.582 INFO:teuthology.orchestra.run.vm00.stdout: https://docs.ceph.com/en/latest/mgr/telemetry/
2026-03-10T14:45:59.582 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T14:45:59.582 INFO:teuthology.orchestra.run.vm00.stdout:Bootstrap complete.
2026-03-10T14:45:59.601 INFO:tasks.cephadm:Fetching config...
2026-03-10T14:45:59.601 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-03-10T14:45:59.601 DEBUG:teuthology.orchestra.run.vm00:> dd if=/etc/ceph/ceph.conf of=/dev/stdout
2026-03-10T14:45:59.604 INFO:tasks.cephadm:Fetching client.admin keyring...
2026-03-10T14:45:59.605 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-03-10T14:45:59.605 DEBUG:teuthology.orchestra.run.vm00:> dd if=/etc/ceph/ceph.client.admin.keyring of=/dev/stdout
2026-03-10T14:45:59.651 INFO:tasks.cephadm:Fetching mon keyring...
2026-03-10T14:45:59.652 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-03-10T14:45:59.652 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/mon.a/keyring of=/dev/stdout
2026-03-10T14:45:59.701 INFO:tasks.cephadm:Fetching pub ssh key...
2026-03-10T14:45:59.701 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-03-10T14:45:59.701 DEBUG:teuthology.orchestra.run.vm00:> dd if=/home/ubuntu/cephtest/ceph.pub of=/dev/stdout
2026-03-10T14:45:59.748 INFO:tasks.cephadm:Installing pub ssh key for root users...
2026-03-10T14:45:59.748 DEBUG:teuthology.orchestra.run.vm00:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEo837u2M0zBeQnAZnO0vd6ldtZNI+ckiCye1sxcu/i+ ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys
2026-03-10T14:45:59.803 INFO:teuthology.orchestra.run.vm00.stdout:ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEo837u2M0zBeQnAZnO0vd6ldtZNI+ckiCye1sxcu/i+ ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7
2026-03-10T14:45:59.811 DEBUG:teuthology.orchestra.run.vm03:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEo837u2M0zBeQnAZnO0vd6ldtZNI+ckiCye1sxcu/i+ ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys
2026-03-10T14:45:59.825 INFO:teuthology.orchestra.run.vm03.stdout:ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEo837u2M0zBeQnAZnO0vd6ldtZNI+ckiCye1sxcu/i+ ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7
2026-03-10T14:45:59.831 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 -- ceph config set mgr mgr/cephadm/allow_ptrace true
2026-03-10T14:46:00.220 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:59 vm00 bash[20726]: audit 2026-03-10T14:45:58.513714+0000 mgr.y (mgr.14152) 9 : audit [DBG] from='client.14166 -' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T14:46:00.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:59 vm00 bash[20726]: audit 2026-03-10T14:45:59.188171+0000 mon.a (mon.0) 91 : audit [DBG] from='client.? 192.168.123.100:0/2496125467' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch
2026-03-10T14:46:00.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:59 vm00 bash[20726]: audit 2026-03-10T14:45:59.542121+0000 mon.a (mon.0) 92 : audit [INF] from='client.? 192.168.123.100:0/2169151767' entity='client.admin'
2026-03-10T14:46:00.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:45:59 vm00 bash[20726]: cluster 2026-03-10T14:45:59.675101+0000 mon.a (mon.0) 93 : cluster [DBG] mgrmap e12: y(active, since 2s)
2026-03-10T14:46:02.970 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:02 vm00 bash[20726]: audit 2026-03-10T14:46:01.622413+0000 mon.a (mon.0) 94 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:02.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:02 vm00 bash[20726]: audit 2026-03-10T14:46:02.261130+0000 mon.a (mon.0) 95 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:04.457 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/mon.a/config
2026-03-10T14:46:04.760 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:04 vm00 bash[20726]: cluster 2026-03-10T14:46:03.633554+0000 mon.a (mon.0) 96 : cluster [DBG] mgrmap e13: y(active, since 6s)
2026-03-10T14:46:04.805 INFO:tasks.cephadm:Distributing conf and client.admin keyring to all hosts + 0755
2026-03-10T14:46:04.805 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 -- ceph orch client-keyring set client.admin '*' --mode 0755
2026-03-10T14:46:06.220 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:05 vm00 bash[20726]: audit 2026-03-10T14:46:04.733451+0000 mon.a (mon.0) 97 : audit [INF] from='client.? 192.168.123.100:0/55613922' entity='client.admin'
2026-03-10T14:46:08.468 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/mon.a/config
2026-03-10T14:46:08.922 INFO:tasks.cephadm:Writing (initial) conf and keyring to vm03
2026-03-10T14:46:08.922 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-03-10T14:46:08.922 DEBUG:teuthology.orchestra.run.vm03:> dd of=/etc/ceph/ceph.conf
2026-03-10T14:46:08.925 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-03-10T14:46:08.925 DEBUG:teuthology.orchestra.run.vm03:> dd of=/etc/ceph/ceph.client.admin.keyring
2026-03-10T14:46:08.969 INFO:tasks.cephadm:Adding host vm03 to orchestrator...
2026-03-10T14:46:08.969 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 -- ceph orch host add vm03
2026-03-10T14:46:09.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:09 vm00 bash[20726]: audit 2026-03-10T14:46:08.417431+0000 mon.a (mon.0) 98 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:09.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:09 vm00 bash[20726]: audit 2026-03-10T14:46:08.442324+0000 mon.a (mon.0) 99 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:09.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:09 vm00 bash[20726]: audit 2026-03-10T14:46:08.443218+0000 mon.a (mon.0) 100 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch
2026-03-10T14:46:09.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:09 vm00 bash[20726]: audit 2026-03-10T14:46:08.449130+0000 mon.a (mon.0) 101 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:09.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:09 vm00 bash[20726]: audit 2026-03-10T14:46:08.455684+0000 mon.a (mon.0) 102 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:46:09.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:09 vm00 bash[20726]: audit 2026-03-10T14:46:08.468494+0000 mon.a (mon.0) 103 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:09.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:09 vm00 bash[20726]: audit 2026-03-10T14:46:08.762998+0000 mgr.y (mgr.14152) 10 : audit [DBG] from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T14:46:09.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:09 vm00 bash[20726]: audit 2026-03-10T14:46:08.766661+0000 mon.a (mon.0) 104 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:09.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:09 vm00 bash[20726]: audit 2026-03-10T14:46:08.767664+0000 mon.a (mon.0) 105 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:46:09.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:09 vm00 bash[20726]: audit 2026-03-10T14:46:08.768836+0000 mon.a (mon.0) 106 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:46:09.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:09 vm00 bash[20726]: audit 2026-03-10T14:46:08.769325+0000 mon.a (mon.0) 107 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T14:46:09.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:09 vm00 bash[20726]: cephadm 2026-03-10T14:46:08.770016+0000 mgr.y (mgr.14152) 11 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf
2026-03-10T14:46:09.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:09 vm00 bash[20726]: cephadm 2026-03-10T14:46:08.808628+0000 mgr.y (mgr.14152) 12 : cephadm [INF] Updating vm00:/var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/config/ceph.conf
2026-03-10T14:46:09.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:09 vm00 bash[20726]: cephadm 2026-03-10T14:46:08.856404+0000 mgr.y (mgr.14152) 13 : cephadm [INF] Updating vm00:/etc/ceph/ceph.client.admin.keyring
2026-03-10T14:46:09.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:09 vm00 bash[20726]: cephadm 2026-03-10T14:46:08.904871+0000 mgr.y (mgr.14152) 14 : cephadm [INF] Updating vm00:/var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/config/ceph.client.admin.keyring
2026-03-10T14:46:09.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:09 vm00 bash[20726]: audit 2026-03-10T14:46:08.952199+0000 mon.a (mon.0) 108 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:09.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:09 vm00 bash[20726]: audit 2026-03-10T14:46:08.955693+0000 mon.a (mon.0) 109 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:09.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:09 vm00 bash[20726]: audit 2026-03-10T14:46:08.962330+0000 mon.a (mon.0) 110 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:13.582 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/mon.a/config
2026-03-10T14:46:14.720 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:14 vm00 bash[20726]: audit 2026-03-10T14:46:13.846317+0000 mgr.y (mgr.14152) 15 : audit [DBG] from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm03", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T14:46:15.721 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:15 vm00 bash[20726]: cephadm 2026-03-10T14:46:14.479901+0000 mgr.y (mgr.14152) 16 : cephadm [INF] Deploying cephadm binary to vm03
2026-03-10T14:46:15.841 INFO:teuthology.orchestra.run.vm00.stdout:Added host 'vm03' with addr '192.168.123.103'
2026-03-10T14:46:15.954 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 -- ceph orch host ls --format=json
2026-03-10T14:46:17.220 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:16 vm00 bash[20726]: audit 2026-03-10T14:46:15.836779+0000 mon.a (mon.0) 111 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:17.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:16 vm00 bash[20726]: cephadm 2026-03-10T14:46:15.837480+0000 mgr.y (mgr.14152) 17 : cephadm [INF] Added host vm03
2026-03-10T14:46:17.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:16 vm00 bash[20726]: audit 2026-03-10T14:46:15.837801+0000 mon.a (mon.0) 112 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:46:17.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:16 vm00 bash[20726]: audit 2026-03-10T14:46:16.169216+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:18.220 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:17 vm00 bash[20726]: cluster 2026-03-10T14:46:16.688867+0000 mgr.y (mgr.14152) 18 : cluster [DBG] pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T14:46:18.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:17 vm00 bash[20726]: audit 2026-03-10T14:46:17.498434+0000 mon.a (mon.0) 114 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:19.470 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:19 vm00 bash[20726]: audit 2026-03-10T14:46:18.076655+0000 mon.a (mon.0) 115 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:20.470 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:20 vm00 bash[20726]: cluster 2026-03-10T14:46:18.689058+0000 mgr.y (mgr.14152) 19 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T14:46:20.576 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/mon.a/config
2026-03-10T14:46:20.846 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T14:46:20.846 INFO:teuthology.orchestra.run.vm00.stdout:[{"addr": "192.168.123.100", "hostname": "vm00", "labels": [], "status": ""}, {"addr": "192.168.123.103", "hostname": "vm03", "labels": [], "status": ""}]
2026-03-10T14:46:20.914 INFO:tasks.cephadm:Setting crush tunables to default
2026-03-10T14:46:20.914 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 -- ceph osd crush tunables default
2026-03-10T14:46:22.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:22 vm00 bash[20726]: cluster 2026-03-10T14:46:20.689238+0000 mgr.y (mgr.14152) 20 : cluster [DBG] pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T14:46:22.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:22 vm00 bash[20726]: audit 2026-03-10T14:46:20.843603+0000 mgr.y (mgr.14152) 21 : audit [DBG] from='client.14178 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-10T14:46:22.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:22 vm00 bash[20726]: audit 2026-03-10T14:46:21.206991+0000 mon.a (mon.0) 116 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:22.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:22 vm00 bash[20726]: audit 2026-03-10T14:46:21.209693+0000 mon.a (mon.0) 117 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:22.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:22 vm00 bash[20726]: audit 2026-03-10T14:46:21.213334+0000 mon.a (mon.0) 118 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:22.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:22 vm00 bash[20726]: audit 2026-03-10T14:46:21.215732+0000 mon.a (mon.0) 119 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:22.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:22 vm00 bash[20726]: audit 2026-03-10T14:46:21.216374+0000 mon.a (mon.0) 120 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch
2026-03-10T14:46:22.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:22 vm00 bash[20726]: audit 2026-03-10T14:46:21.217465+0000 mon.a (mon.0) 121 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:46:22.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:22 vm00 bash[20726]: audit 2026-03-10T14:46:21.218615+0000 mon.a (mon.0) 122 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T14:46:22.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:22 vm00 bash[20726]: audit 2026-03-10T14:46:21.346997+0000 mon.a (mon.0) 123 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:22.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:22 vm00 bash[20726]: audit 2026-03-10T14:46:21.349903+0000 mon.a (mon.0) 124 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:22.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:22 vm00 bash[20726]: audit 2026-03-10T14:46:21.352868+0000 mon.a (mon.0) 125 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:23.470 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:23 vm00 bash[20726]: cephadm 2026-03-10T14:46:21.219733+0000 mgr.y (mgr.14152) 22 : cephadm [INF] Updating vm03:/etc/ceph/ceph.conf
2026-03-10T14:46:23.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:23 vm00 bash[20726]: cephadm 2026-03-10T14:46:21.250051+0000 mgr.y (mgr.14152) 23 : cephadm [INF] Updating vm03:/var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/config/ceph.conf
2026-03-10T14:46:23.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:23 vm00 bash[20726]: cephadm 2026-03-10T14:46:21.278761+0000 mgr.y (mgr.14152) 24 : cephadm [INF] Updating vm03:/etc/ceph/ceph.client.admin.keyring
2026-03-10T14:46:23.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:23 vm00 bash[20726]: cephadm 2026-03-10T14:46:21.309853+0000 mgr.y (mgr.14152) 25 : cephadm [INF] Updating vm03:/var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/config/ceph.client.admin.keyring
2026-03-10T14:46:24.470 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:24 vm00 bash[20726]: cluster 2026-03-10T14:46:22.689485+0000 mgr.y (mgr.14152) 26 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B
avail 2026-03-10T14:46:24.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:24 vm00 bash[20726]: cluster 2026-03-10T14:46:22.689485+0000 mgr.y (mgr.14152) 26 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:46:24.584 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/mon.a/config 2026-03-10T14:46:25.221 INFO:teuthology.orchestra.run.vm00.stderr:adjusted tunables profile to default 2026-03-10T14:46:25.281 INFO:tasks.cephadm:Adding mon.a on vm00 2026-03-10T14:46:25.281 INFO:tasks.cephadm:Adding mon.c on vm00 2026-03-10T14:46:25.281 INFO:tasks.cephadm:Adding mon.b on vm03 2026-03-10T14:46:25.282 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 -- ceph orch apply mon '3;vm00:192.168.123.100=a;vm00:[v2:192.168.123.100:3301,v1:192.168.123.100:6790]=c;vm03:192.168.123.103=b' 2026-03-10T14:46:25.470 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:25 vm00 bash[20726]: audit 2026-03-10T14:46:24.830385+0000 mon.a (mon.0) 126 : audit [INF] from='client.? 192.168.123.100:0/3045770547' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch 2026-03-10T14:46:25.470 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:25 vm00 bash[20726]: audit 2026-03-10T14:46:24.830385+0000 mon.a (mon.0) 126 : audit [INF] from='client.? 
192.168.123.100:0/3045770547' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch 2026-03-10T14:46:26.403 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/config/ceph.conf 2026-03-10T14:46:26.470 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:26 vm00 bash[20726]: cluster 2026-03-10T14:46:24.689724+0000 mgr.y (mgr.14152) 27 : cluster [DBG] pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:46:26.470 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:26 vm00 bash[20726]: cluster 2026-03-10T14:46:24.689724+0000 mgr.y (mgr.14152) 27 : cluster [DBG] pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:46:26.470 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:26 vm00 bash[20726]: audit 2026-03-10T14:46:25.218791+0000 mon.a (mon.0) 127 : audit [INF] from='client.? 192.168.123.100:0/3045770547' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished 2026-03-10T14:46:26.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:26 vm00 bash[20726]: audit 2026-03-10T14:46:25.218791+0000 mon.a (mon.0) 127 : audit [INF] from='client.? 192.168.123.100:0/3045770547' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished 2026-03-10T14:46:26.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:26 vm00 bash[20726]: cluster 2026-03-10T14:46:25.220320+0000 mon.a (mon.0) 128 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-10T14:46:26.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:26 vm00 bash[20726]: cluster 2026-03-10T14:46:25.220320+0000 mon.a (mon.0) 128 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-10T14:46:26.653 INFO:teuthology.orchestra.run.vm03.stdout:Scheduled mon update... 
2026-03-10T14:46:26.716 DEBUG:teuthology.orchestra.run.vm00:mon.c> sudo journalctl -f -n 0 -u ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@mon.c.service 2026-03-10T14:46:26.717 DEBUG:teuthology.orchestra.run.vm03:mon.b> sudo journalctl -f -n 0 -u ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@mon.b.service 2026-03-10T14:46:26.718 INFO:tasks.cephadm:Waiting for 3 mons in monmap... 2026-03-10T14:46:26.718 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 -- ceph mon dump -f json 2026-03-10T14:46:27.888 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/mon.b/config 2026-03-10T14:46:27.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:27 vm00 bash[20726]: audit 2026-03-10T14:46:26.646128+0000 mgr.y (mgr.14152) 28 : audit [DBG] from='client.14182 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "3;vm00:192.168.123.100=a;vm00:[v2:192.168.123.100:3301,v1:192.168.123.100:6790]=c;vm03:192.168.123.103=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:46:27.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:27 vm00 bash[20726]: audit 2026-03-10T14:46:26.646128+0000 mgr.y (mgr.14152) 28 : audit [DBG] from='client.14182 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "3;vm00:192.168.123.100=a;vm00:[v2:192.168.123.100:3301,v1:192.168.123.100:6790]=c;vm03:192.168.123.103=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:46:27.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:27 vm00 bash[20726]: cephadm 2026-03-10T14:46:26.647326+0000 mgr.y (mgr.14152) 29 : cephadm [INF] Saving service mon spec with placement 
vm00:192.168.123.100=a;vm00:[v2:192.168.123.100:3301,v1:192.168.123.100:6790]=c;vm03:192.168.123.103=b;count:3 2026-03-10T14:46:27.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:27 vm00 bash[20726]: cephadm 2026-03-10T14:46:26.647326+0000 mgr.y (mgr.14152) 29 : cephadm [INF] Saving service mon spec with placement vm00:192.168.123.100=a;vm00:[v2:192.168.123.100:3301,v1:192.168.123.100:6790]=c;vm03:192.168.123.103=b;count:3 2026-03-10T14:46:27.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:27 vm00 bash[20726]: audit 2026-03-10T14:46:26.649890+0000 mon.a (mon.0) 129 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:27.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:27 vm00 bash[20726]: audit 2026-03-10T14:46:26.649890+0000 mon.a (mon.0) 129 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:27.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:27 vm00 bash[20726]: audit 2026-03-10T14:46:26.650340+0000 mon.a (mon.0) 130 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:46:27.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:27 vm00 bash[20726]: audit 2026-03-10T14:46:26.650340+0000 mon.a (mon.0) 130 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:46:27.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:27 vm00 bash[20726]: audit 2026-03-10T14:46:26.651261+0000 mon.a (mon.0) 131 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:46:27.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:27 vm00 bash[20726]: audit 2026-03-10T14:46:26.651261+0000 mon.a (mon.0) 131 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": 
"config generate-minimal-conf"}]: dispatch 2026-03-10T14:46:27.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:27 vm00 bash[20726]: audit 2026-03-10T14:46:26.651651+0000 mon.a (mon.0) 132 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:46:27.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:27 vm00 bash[20726]: audit 2026-03-10T14:46:26.651651+0000 mon.a (mon.0) 132 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:46:27.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:27 vm00 bash[20726]: audit 2026-03-10T14:46:26.654103+0000 mon.a (mon.0) 133 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:27.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:27 vm00 bash[20726]: audit 2026-03-10T14:46:26.654103+0000 mon.a (mon.0) 133 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:27.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:27 vm00 bash[20726]: audit 2026-03-10T14:46:26.654911+0000 mon.a (mon.0) 134 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T14:46:27.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:27 vm00 bash[20726]: audit 2026-03-10T14:46:26.654911+0000 mon.a (mon.0) 134 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T14:46:27.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:27 vm00 bash[20726]: audit 2026-03-10T14:46:26.655302+0000 mon.a (mon.0) 135 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:46:27.971 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:27 vm00 bash[20726]: audit 2026-03-10T14:46:26.655302+0000 mon.a (mon.0) 135 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:46:27.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:27 vm00 bash[20726]: cephadm 2026-03-10T14:46:26.655740+0000 mgr.y (mgr.14152) 30 : cephadm [INF] Deploying daemon mon.b on vm03 2026-03-10T14:46:27.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:27 vm00 bash[20726]: cephadm 2026-03-10T14:46:26.655740+0000 mgr.y (mgr.14152) 30 : cephadm [INF] Deploying daemon mon.b on vm03 2026-03-10T14:46:27.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:27 vm00 bash[20726]: cluster 2026-03-10T14:46:26.689976+0000 mgr.y (mgr.14152) 31 : cluster [DBG] pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:46:27.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:27 vm00 bash[20726]: cluster 2026-03-10T14:46:26.689976+0000 mgr.y (mgr.14152) 31 : cluster [DBG] pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:46:28.555 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:28 vm03 bash[23394]: debug 2026-03-10T14:46:28.425+0000 7f77d6da9640 1 mon.b@-1(synchronizing).paxosservice(auth 1..3) refresh upgraded, format 0 -> 3 2026-03-10T14:46:29.181 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:46:29 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-10T14:46:29.181 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:29 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:46:29.470 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:29 vm00 bash[28403]: debug 2026-03-10T14:46:29.391+0000 7fdbbd708640 1 mon.c@-1(synchronizing).paxosservice(auth 1..3) refresh upgraded, format 0 -> 3 2026-03-10T14:46:29.471 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:46:29 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:46:29.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:29 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-10T14:46:33.454 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T14:46:33.454 INFO:teuthology.orchestra.run.vm03.stdout:{"epoch":2,"fsid":"93bd26bc-1c8f-11f1-8404-610ce866bde7","modified":"2026-03-10T14:46:28.428565Z","created":"2026-03-10T14:45:30.662441Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:3300","nonce":0},{"type":"v1","addr":"192.168.123.100:6789","nonce":0}]},"addr":"192.168.123.100:6789/0","public_addr":"192.168.123.100:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"b","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:3300","nonce":0},{"type":"v1","addr":"192.168.123.103:6789","nonce":0}]},"addr":"192.168.123.103:6789/0","public_addr":"192.168.123.103:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1]} 2026-03-10T14:46:33.454 INFO:teuthology.orchestra.run.vm03.stderr:dumped monmap epoch 2 2026-03-10T14:46:33.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:33 vm03 bash[23394]: cephadm 2026-03-10T14:46:28.274799+0000 mgr.y (mgr.14152) 32 : cephadm [INF] Deploying daemon mon.c on vm00 2026-03-10T14:46:33.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:33 vm03 bash[23394]: cephadm 2026-03-10T14:46:28.274799+0000 mgr.y (mgr.14152) 32 : cephadm [INF] Deploying daemon mon.c on vm00 2026-03-10T14:46:33.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:33 vm03 bash[23394]: audit 2026-03-10T14:46:28.431520+0000 mon.a (mon.0) 142 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T14:46:33.876 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:33 vm03 bash[23394]: audit 2026-03-10T14:46:28.431520+0000 mon.a (mon.0) 142 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T14:46:33.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:33 vm03 bash[23394]: audit 2026-03-10T14:46:28.431647+0000 mon.a (mon.0) 143 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T14:46:33.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:33 vm03 bash[23394]: audit 2026-03-10T14:46:28.431647+0000 mon.a (mon.0) 143 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T14:46:33.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:33 vm03 bash[23394]: cluster 2026-03-10T14:46:28.431891+0000 mon.a (mon.0) 144 : cluster [INF] mon.a calling monitor election 2026-03-10T14:46:33.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:33 vm03 bash[23394]: cluster 2026-03-10T14:46:28.431891+0000 mon.a (mon.0) 144 : cluster [INF] mon.a calling monitor election 2026-03-10T14:46:33.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:33 vm03 bash[23394]: audit 2026-03-10T14:46:28.502372+0000 mon.a (mon.0) 145 : audit [DBG] from='client.? 192.168.123.103:0/694533024' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T14:46:33.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:33 vm03 bash[23394]: audit 2026-03-10T14:46:28.502372+0000 mon.a (mon.0) 145 : audit [DBG] from='client.? 
192.168.123.103:0/694533024' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T14:46:33.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:33 vm03 bash[23394]: cluster 2026-03-10T14:46:28.690184+0000 mgr.y (mgr.14152) 33 : cluster [DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:46:33.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:33 vm03 bash[23394]: cluster 2026-03-10T14:46:28.690184+0000 mgr.y (mgr.14152) 33 : cluster [DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:46:33.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:33 vm03 bash[23394]: audit 2026-03-10T14:46:29.400328+0000 mon.a (mon.0) 146 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T14:46:33.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:33 vm03 bash[23394]: audit 2026-03-10T14:46:29.400328+0000 mon.a (mon.0) 146 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T14:46:33.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:33 vm03 bash[23394]: audit 2026-03-10T14:46:29.427589+0000 mon.a (mon.0) 147 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T14:46:33.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:33 vm03 bash[23394]: audit 2026-03-10T14:46:29.427589+0000 mon.a (mon.0) 147 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T14:46:33.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:33 vm03 bash[23394]: audit 2026-03-10T14:46:30.400624+0000 mon.a (mon.0) 148 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 
2026-03-10T14:46:33.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:33 vm03 bash[23394]: audit 2026-03-10T14:46:30.400624+0000 mon.a (mon.0) 148 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T14:46:33.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:33 vm03 bash[23394]: audit 2026-03-10T14:46:30.428162+0000 mon.a (mon.0) 149 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T14:46:33.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:33 vm03 bash[23394]: audit 2026-03-10T14:46:30.428162+0000 mon.a (mon.0) 149 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T14:46:33.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:33 vm03 bash[23394]: cluster 2026-03-10T14:46:30.433097+0000 mon.b (mon.1) 1 : cluster [INF] mon.b calling monitor election 2026-03-10T14:46:33.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:33 vm03 bash[23394]: cluster 2026-03-10T14:46:30.433097+0000 mon.b (mon.1) 1 : cluster [INF] mon.b calling monitor election 2026-03-10T14:46:33.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:33 vm03 bash[23394]: cluster 2026-03-10T14:46:30.690438+0000 mgr.y (mgr.14152) 34 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:46:33.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:33 vm03 bash[23394]: cluster 2026-03-10T14:46:30.690438+0000 mgr.y (mgr.14152) 34 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:46:33.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:33 vm03 bash[23394]: audit 2026-03-10T14:46:31.400882+0000 mon.a (mon.0) 150 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 
2026-03-10T14:46:33.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:33 vm03 bash[23394]: audit 2026-03-10T14:46:31.400882+0000 mon.a (mon.0) 150 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T14:46:33.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:33 vm03 bash[23394]: audit 2026-03-10T14:46:31.428326+0000 mon.a (mon.0) 151 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T14:46:33.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:33 vm03 bash[23394]: audit 2026-03-10T14:46:31.428326+0000 mon.a (mon.0) 151 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T14:46:33.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:33 vm03 bash[23394]: audit 2026-03-10T14:46:32.400876+0000 mon.a (mon.0) 152 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T14:46:33.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:33 vm03 bash[23394]: audit 2026-03-10T14:46:32.400876+0000 mon.a (mon.0) 152 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T14:46:33.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:33 vm03 bash[23394]: audit 2026-03-10T14:46:32.428207+0000 mon.a (mon.0) 153 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T14:46:33.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:33 vm03 bash[23394]: audit 2026-03-10T14:46:32.428207+0000 mon.a (mon.0) 153 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T14:46:33.877 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:33 vm03 bash[23394]: cluster 2026-03-10T14:46:32.690718+0000 mgr.y (mgr.14152) 35 : cluster [DBG] pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:46:33.877 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:33 vm03 bash[23394]: cluster 2026-03-10T14:46:32.690718+0000 mgr.y (mgr.14152) 35 : cluster [DBG] pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:46:33.877 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:33 vm03 bash[23394]: audit 2026-03-10T14:46:33.400792+0000 mon.a (mon.0) 154 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T14:46:33.877 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:33 vm03 bash[23394]: audit 2026-03-10T14:46:33.400792+0000 mon.a (mon.0) 154 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T14:46:33.877 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:33 vm03 bash[23394]: audit 2026-03-10T14:46:33.428381+0000 mon.a (mon.0) 155 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T14:46:33.877 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:33 vm03 bash[23394]: audit 2026-03-10T14:46:33.428381+0000 mon.a (mon.0) 155 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T14:46:33.877 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:33 vm03 bash[23394]: cluster 2026-03-10T14:46:33.445024+0000 mon.a (mon.0) 156 : cluster [INF] mon.a is new leader, mons a,b in quorum (ranks 0,1) 2026-03-10T14:46:33.877 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:33 vm03 bash[23394]: cluster 2026-03-10T14:46:33.445024+0000 mon.a (mon.0) 156 : cluster [INF] mon.a is new leader, mons a,b in quorum 
(ranks 0,1) 2026-03-10T14:46:33.877 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:33 vm03 bash[23394]: cluster 2026-03-10T14:46:33.449378+0000 mon.a (mon.0) 157 : cluster [DBG] monmap epoch 2 2026-03-10T14:46:33.877 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:33 vm03 bash[23394]: cluster 2026-03-10T14:46:33.449378+0000 mon.a (mon.0) 157 : cluster [DBG] monmap epoch 2 2026-03-10T14:46:33.877 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:33 vm03 bash[23394]: cluster 2026-03-10T14:46:33.449426+0000 mon.a (mon.0) 158 : cluster [DBG] fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 2026-03-10T14:46:33.877 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:33 vm03 bash[23394]: cluster 2026-03-10T14:46:33.449426+0000 mon.a (mon.0) 158 : cluster [DBG] fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 2026-03-10T14:46:33.877 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:33 vm03 bash[23394]: cluster 2026-03-10T14:46:33.449465+0000 mon.a (mon.0) 159 : cluster [DBG] last_changed 2026-03-10T14:46:28.428565+0000 2026-03-10T14:46:33.877 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:33 vm03 bash[23394]: cluster 2026-03-10T14:46:33.449465+0000 mon.a (mon.0) 159 : cluster [DBG] last_changed 2026-03-10T14:46:28.428565+0000 2026-03-10T14:46:33.877 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:33 vm03 bash[23394]: cluster 2026-03-10T14:46:33.449504+0000 mon.a (mon.0) 160 : cluster [DBG] created 2026-03-10T14:45:30.662441+0000 2026-03-10T14:46:33.877 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:33 vm03 bash[23394]: cluster 2026-03-10T14:46:33.449504+0000 mon.a (mon.0) 160 : cluster [DBG] created 2026-03-10T14:45:30.662441+0000 2026-03-10T14:46:33.877 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:33 vm03 bash[23394]: cluster 2026-03-10T14:46:33.449542+0000 mon.a (mon.0) 161 : cluster [DBG] min_mon_release 19 (squid) 2026-03-10T14:46:33.877 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:33 vm03 bash[23394]: cluster 2026-03-10T14:46:33.449542+0000 
mon.a (mon.0) 161 : cluster [DBG] min_mon_release 19 (squid)
2026-03-10T14:46:33.877 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:33 vm03 bash[23394]: cluster 2026-03-10T14:46:33.449581+0000 mon.a (mon.0) 162 : cluster [DBG] election_strategy: 1
2026-03-10T14:46:33.877 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:33 vm03 bash[23394]: cluster 2026-03-10T14:46:33.449620+0000 mon.a (mon.0) 163 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a
2026-03-10T14:46:33.877 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:33 vm03 bash[23394]: cluster 2026-03-10T14:46:33.449658+0000 mon.a (mon.0) 164 : cluster [DBG] 1: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.b
2026-03-10T14:46:33.877 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:33 vm03 bash[23394]: cluster 2026-03-10T14:46:33.450153+0000 mon.a (mon.0) 165 : cluster [DBG] fsmap
2026-03-10T14:46:33.877 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:33 vm03 bash[23394]: cluster 2026-03-10T14:46:33.450230+0000 mon.a (mon.0) 166 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in
2026-03-10T14:46:33.877 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:33 vm03 bash[23394]: cluster 2026-03-10T14:46:33.450468+0000 mon.a (mon.0) 167 : cluster [DBG] mgrmap e13: y(active, since 36s)
2026-03-10T14:46:33.877 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:33 vm03 bash[23394]: cluster 2026-03-10T14:46:33.450761+0000 mon.a (mon.0) 168 : cluster [INF] overall HEALTH_OK
2026-03-10T14:46:33.877 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:33 vm03 bash[23394]: audit 2026-03-10T14:46:33.457976+0000 mon.a (mon.0) 169 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:33.877 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:33 vm03 bash[23394]: audit 2026-03-10T14:46:33.466905+0000 mon.a (mon.0) 170 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:33.878 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:33 vm03 bash[23394]: audit 2026-03-10T14:46:33.473488+0000 mon.a (mon.0) 171 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:33.878 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:33 vm03 bash[23394]: audit 2026-03-10T14:46:33.478241+0000 mon.a (mon.0) 172 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:33.878 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:33 vm03 bash[23394]: audit 2026-03-10T14:46:33.490664+0000 mon.a (mon.0) 173 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:46:33.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:33 vm00 bash[20726]: cephadm 2026-03-10T14:46:28.274799+0000 mgr.y (mgr.14152) 32 : cephadm [INF] Deploying daemon mon.c on vm00
2026-03-10T14:46:33.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:33 vm00 bash[20726]: audit 2026-03-10T14:46:28.431520+0000 mon.a (mon.0) 142 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T14:46:33.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:33 vm00 bash[20726]: audit 2026-03-10T14:46:28.431647+0000 mon.a (mon.0) 143 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T14:46:33.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:33 vm00 bash[20726]: cluster 2026-03-10T14:46:28.431891+0000 mon.a (mon.0) 144 : cluster [INF] mon.a calling monitor election
2026-03-10T14:46:33.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:33 vm00 bash[20726]: audit 2026-03-10T14:46:28.502372+0000 mon.a (mon.0) 145 : audit [DBG] from='client.? 192.168.123.103:0/694533024' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-10T14:46:33.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:33 vm00 bash[20726]: cluster 2026-03-10T14:46:28.690184+0000 mgr.y (mgr.14152) 33 : cluster [DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T14:46:33.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:33 vm00 bash[20726]: audit 2026-03-10T14:46:29.400328+0000 mon.a (mon.0) 146 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T14:46:33.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:33 vm00 bash[20726]: audit 2026-03-10T14:46:29.427589+0000 mon.a (mon.0) 147 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T14:46:33.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:33 vm00 bash[20726]: audit 2026-03-10T14:46:30.400624+0000 mon.a (mon.0) 148 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T14:46:33.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:33 vm00 bash[20726]: audit 2026-03-10T14:46:30.428162+0000 mon.a (mon.0) 149 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T14:46:33.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:33 vm00 bash[20726]: cluster 2026-03-10T14:46:30.433097+0000 mon.b (mon.1) 1 : cluster [INF] mon.b calling monitor election
2026-03-10T14:46:33.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:33 vm00 bash[20726]: cluster 2026-03-10T14:46:30.690438+0000 mgr.y (mgr.14152) 34 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T14:46:33.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:33 vm00 bash[20726]: audit 2026-03-10T14:46:31.400882+0000 mon.a (mon.0) 150 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T14:46:33.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:33 vm00 bash[20726]: audit 2026-03-10T14:46:31.428326+0000 mon.a (mon.0) 151 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T14:46:33.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:33 vm00 bash[20726]: audit 2026-03-10T14:46:32.400876+0000 mon.a (mon.0) 152 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T14:46:33.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:33 vm00 bash[20726]: audit 2026-03-10T14:46:32.428207+0000 mon.a (mon.0) 153 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T14:46:33.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:33 vm00 bash[20726]: cluster 2026-03-10T14:46:32.690718+0000 mgr.y (mgr.14152) 35 : cluster [DBG] pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T14:46:33.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:33 vm00 bash[20726]: audit 2026-03-10T14:46:33.400792+0000 mon.a (mon.0) 154 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T14:46:33.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:33 vm00 bash[20726]: audit 2026-03-10T14:46:33.428381+0000 mon.a (mon.0) 155 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T14:46:33.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:33 vm00 bash[20726]: cluster 2026-03-10T14:46:33.445024+0000 mon.a (mon.0) 156 : cluster [INF] mon.a is new leader, mons a,b in quorum (ranks 0,1)
2026-03-10T14:46:33.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:33 vm00 bash[20726]: cluster 2026-03-10T14:46:33.449378+0000 mon.a (mon.0) 157 : cluster [DBG] monmap epoch 2
2026-03-10T14:46:33.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:33 vm00 bash[20726]: cluster 2026-03-10T14:46:33.449426+0000 mon.a (mon.0) 158 : cluster [DBG] fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7
2026-03-10T14:46:33.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:33 vm00 bash[20726]: cluster 2026-03-10T14:46:33.449465+0000 mon.a (mon.0) 159 : cluster [DBG] last_changed 2026-03-10T14:46:28.428565+0000
2026-03-10T14:46:33.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:33 vm00 bash[20726]: cluster 2026-03-10T14:46:33.449504+0000 mon.a (mon.0) 160 : cluster [DBG] created 2026-03-10T14:45:30.662441+0000
2026-03-10T14:46:33.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:33 vm00 bash[20726]: cluster 2026-03-10T14:46:33.449542+0000 mon.a (mon.0) 161 : cluster [DBG] min_mon_release 19 (squid)
2026-03-10T14:46:33.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:33 vm00 bash[20726]: cluster 2026-03-10T14:46:33.449581+0000 mon.a (mon.0) 162 : cluster [DBG] election_strategy: 1
2026-03-10T14:46:33.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:33 vm00 bash[20726]: cluster 2026-03-10T14:46:33.449620+0000 mon.a (mon.0) 163 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a
2026-03-10T14:46:33.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:33 vm00 bash[20726]: cluster 2026-03-10T14:46:33.449658+0000 mon.a (mon.0) 164 : cluster [DBG] 1: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.b
2026-03-10T14:46:33.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:33 vm00 bash[20726]: cluster 2026-03-10T14:46:33.450153+0000 mon.a (mon.0) 165 : cluster [DBG] fsmap
2026-03-10T14:46:33.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:33 vm00 bash[20726]: cluster 2026-03-10T14:46:33.450230+0000 mon.a (mon.0) 166 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in
2026-03-10T14:46:33.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:33 vm00 bash[20726]: cluster 2026-03-10T14:46:33.450468+0000 mon.a (mon.0) 167 : cluster [DBG] mgrmap e13: y(active, since 36s)
2026-03-10T14:46:33.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:33 vm00 bash[20726]: cluster 2026-03-10T14:46:33.450761+0000 mon.a (mon.0) 168 : cluster [INF] overall HEALTH_OK
2026-03-10T14:46:33.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:33 vm00 bash[20726]: audit 2026-03-10T14:46:33.457976+0000 mon.a (mon.0) 169 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:33.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:33 vm00 bash[20726]: audit 2026-03-10T14:46:33.466905+0000 mon.a (mon.0) 170 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:33.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:33 vm00 bash[20726]: audit 2026-03-10T14:46:33.473488+0000 mon.a (mon.0) 171 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:33.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:33 vm00 bash[20726]: audit 2026-03-10T14:46:33.478241+0000 mon.a (mon.0) 172 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:33.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:33 vm00 bash[20726]: audit 2026-03-10T14:46:33.490664+0000 mon.a (mon.0) 173 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:46:34.536 INFO:tasks.cephadm:Waiting for 3 mons in monmap...
2026-03-10T14:46:34.536 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 -- ceph mon dump -f json
2026-03-10T14:46:34.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:34 vm03 bash[23394]: audit 2026-03-10T14:46:34.400989+0000 mon.a (mon.0) 174 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T14:46:34.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:34 vm03 bash[23394]: audit 2026-03-10T14:46:34.428318+0000 mon.a (mon.0) 175 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T14:46:34.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:34 vm00 bash[20726]: audit 2026-03-10T14:46:34.400989+0000 mon.a (mon.0) 174 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T14:46:34.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:34 vm00 bash[20726]: audit 2026-03-10T14:46:34.428318+0000 mon.a (mon.0) 175 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T14:46:35.720 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:46:35 vm00 bash[21005]: debug 2026-03-10T14:46:35.423+0000 7f05d954f640 -1 mgr.server handle_report got status from non-daemon mon.b
2026-03-10T14:46:38.296 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/mon.b/config
2026-03-10T14:46:40.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:40 vm03 bash[23394]: audit 2026-03-10T14:46:35.492122+0000 mon.a (mon.0) 177 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T14:46:40.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:40 vm03 bash[23394]: cluster 2026-03-10T14:46:35.492652+0000 mon.a (mon.0) 178 : cluster [INF] mon.a calling monitor election
2026-03-10T14:46:40.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:40 vm03 bash[23394]: audit 2026-03-10T14:46:35.541376+0000 mon.a (mon.0) 179 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T14:46:40.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:40 vm03 bash[23394]: audit 2026-03-10T14:46:35.541637+0000 mon.a (mon.0) 180 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T14:46:40.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:40 vm03 bash[23394]: cluster 2026-03-10T14:46:35.600820+0000 mon.b (mon.1) 2 : cluster [INF] mon.b calling monitor election
2026-03-10T14:46:40.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:40 vm03 bash[23394]: audit 2026-03-10T14:46:36.401368+0000 mon.a (mon.0) 181 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T14:46:40.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:40 vm03 bash[23394]: cluster 2026-03-10T14:46:36.691143+0000 mgr.y (mgr.14152) 37 : cluster [DBG] pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T14:46:40.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:40 vm03 bash[23394]: audit 2026-03-10T14:46:37.401798+0000 mon.a (mon.0) 182 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T14:46:40.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:40 vm03 bash[23394]: cluster 2026-03-10T14:46:37.403934+0000 mon.c (mon.2) 1 : cluster [INF] mon.c calling monitor election
2026-03-10T14:46:40.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:40 vm03 bash[23394]: audit 2026-03-10T14:46:38.401647+0000 mon.a (mon.0) 183 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T14:46:40.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:40 vm03 bash[23394]: cluster 2026-03-10T14:46:38.691358+0000 mgr.y (mgr.14152) 38 : cluster [DBG] pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T14:46:40.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:40 vm03 bash[23394]: audit 2026-03-10T14:46:39.401706+0000 mon.a (mon.0) 184 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T14:46:40.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:40 vm03 bash[23394]: audit 2026-03-10T14:46:40.401951+0000 mon.a (mon.0) 185 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T14:46:40.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:40 vm03 bash[23394]: cluster 2026-03-10T14:46:40.545291+0000 mon.a (mon.0) 186 : cluster [INF] mon.a is new leader, mons a,b,c in quorum (ranks 0,1,2)
2026-03-10T14:46:40.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:40 vm03 bash[23394]: cluster 2026-03-10T14:46:40.550814+0000 mon.a (mon.0) 187 : cluster [DBG] monmap epoch 3
2026-03-10T14:46:40.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:40 vm03 bash[23394]: cluster 2026-03-10T14:46:40.550839+0000 mon.a (mon.0) 188 : cluster [DBG] fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7
2026-03-10T14:46:40.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:40 vm03 bash[23394]: cluster 2026-03-10T14:46:40.550851+0000 mon.a (mon.0) 189 : cluster [DBG] last_changed 2026-03-10T14:46:35.401630+0000
2026-03-10T14:46:40.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:40 vm03 bash[23394]: cluster 2026-03-10T14:46:40.550861+0000 mon.a (mon.0) 190 : cluster [DBG] created 2026-03-10T14:45:30.662441+0000
2026-03-10T14:46:40.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:40 vm03 bash[23394]: cluster 2026-03-10T14:46:40.550873+0000 mon.a (mon.0) 191 : cluster [DBG] min_mon_release 19 (squid)
2026-03-10T14:46:40.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:40 vm03 bash[23394]: cluster 2026-03-10T14:46:40.550882+0000 mon.a (mon.0) 192 : cluster [DBG] election_strategy: 1
2026-03-10T14:46:40.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:40 vm03 bash[23394]: cluster 2026-03-10T14:46:40.550894+0000 mon.a (mon.0) 193 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a
2026-03-10T14:46:40.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:40 vm03 bash[23394]: cluster 2026-03-10T14:46:40.550904+0000 mon.a (mon.0) 194 : cluster [DBG] 1: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.b
2026-03-10T14:46:40.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:40 vm03 bash[23394]: cluster 2026-03-10T14:46:40.550918+0000 mon.a (mon.0) 195 : cluster [DBG] 2: [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] mon.c
2026-03-10T14:46:40.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:40 vm03 bash[23394]: cluster 2026-03-10T14:46:40.551351+0000 mon.a (mon.0) 196 : cluster [DBG] fsmap
2026-03-10T14:46:40.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:40 vm03 bash[23394]: cluster 2026-03-10T14:46:40.551383+0000 mon.a (mon.0) 197 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in
2026-03-10T14:46:40.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:40 vm03 bash[23394]: cluster 2026-03-10T14:46:40.551563+0000 mon.a (mon.0) 198 : cluster [DBG] mgrmap e13: y(active, since 43s)
2026-03-10T14:46:40.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:40 vm03 bash[23394]: cluster 2026-03-10T14:46:40.551669+0000 mon.a (mon.0) 199 : cluster [INF] overall HEALTH_OK
2026-03-10T14:46:40.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:40 vm03 bash[23394]: audit 2026-03-10T14:46:40.576773+0000 mon.a (mon.0) 200 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:40.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:40 vm03 bash[23394]: audit 2026-03-10T14:46:40.582273+0000 mon.a (mon.0) 201 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:40.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:40 vm03 bash[23394]: audit 2026-03-10T14:46:40.587799+0000 mon.a (mon.0) 202 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:40.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:40 vm03 bash[23394]: audit 2026-03-10T14:46:40.594332+0000 mon.a (mon.0) 203 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:40.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:40 vm03 bash[23394]: audit 2026-03-10T14:46:40.599026+0000 mon.a (mon.0) 204 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:40.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:40 vm03 bash[23394]: audit 2026-03-10T14:46:40.600179+0000 mon.a (mon.0) 205 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]:
dispatch 2026-03-10T14:46:40.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:40 vm03 bash[23394]: audit 2026-03-10T14:46:40.600179+0000 mon.a (mon.0) 205 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:46:40.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:40 vm03 bash[23394]: audit 2026-03-10T14:46:40.600903+0000 mon.a (mon.0) 206 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:46:40.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:40 vm03 bash[23394]: audit 2026-03-10T14:46:40.600903+0000 mon.a (mon.0) 206 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:46:40.971 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cephadm 2026-03-10T14:46:28.274799+0000 mgr.y (mgr.14152) 32 : cephadm [INF] Deploying daemon mon.c on vm00 2026-03-10T14:46:40.971 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cephadm 2026-03-10T14:46:28.274799+0000 mgr.y (mgr.14152) 32 : cephadm [INF] Deploying daemon mon.c on vm00 2026-03-10T14:46:40.971 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: audit 2026-03-10T14:46:28.431520+0000 mon.a (mon.0) 142 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T14:46:40.971 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: audit 2026-03-10T14:46:28.431520+0000 mon.a (mon.0) 142 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T14:46:40.971 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: audit 2026-03-10T14:46:28.431647+0000 mon.a 
(mon.0) 143 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T14:46:40.971 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: audit 2026-03-10T14:46:28.431647+0000 mon.a (mon.0) 143 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T14:46:40.971 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cluster 2026-03-10T14:46:28.431891+0000 mon.a (mon.0) 144 : cluster [INF] mon.a calling monitor election 2026-03-10T14:46:40.971 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cluster 2026-03-10T14:46:28.431891+0000 mon.a (mon.0) 144 : cluster [INF] mon.a calling monitor election 2026-03-10T14:46:40.971 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: audit 2026-03-10T14:46:28.502372+0000 mon.a (mon.0) 145 : audit [DBG] from='client.? 192.168.123.103:0/694533024' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T14:46:40.971 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: audit 2026-03-10T14:46:28.502372+0000 mon.a (mon.0) 145 : audit [DBG] from='client.? 
192.168.123.103:0/694533024' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T14:46:40.971 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cluster 2026-03-10T14:46:28.690184+0000 mgr.y (mgr.14152) 33 : cluster [DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:46:40.971 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cluster 2026-03-10T14:46:28.690184+0000 mgr.y (mgr.14152) 33 : cluster [DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:46:40.971 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: audit 2026-03-10T14:46:29.400328+0000 mon.a (mon.0) 146 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T14:46:40.971 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: audit 2026-03-10T14:46:29.400328+0000 mon.a (mon.0) 146 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T14:46:40.971 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: audit 2026-03-10T14:46:29.427589+0000 mon.a (mon.0) 147 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T14:46:40.971 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: audit 2026-03-10T14:46:29.427589+0000 mon.a (mon.0) 147 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T14:46:40.971 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: audit 2026-03-10T14:46:30.400624+0000 mon.a (mon.0) 148 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 
2026-03-10T14:46:40.971 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: audit 2026-03-10T14:46:30.400624+0000 mon.a (mon.0) 148 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T14:46:40.971 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: audit 2026-03-10T14:46:30.428162+0000 mon.a (mon.0) 149 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T14:46:40.971 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: audit 2026-03-10T14:46:30.428162+0000 mon.a (mon.0) 149 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T14:46:40.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:40 vm00 bash[20726]: audit 2026-03-10T14:46:35.492122+0000 mon.a (mon.0) 177 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T14:46:40.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:40 vm00 bash[20726]: audit 2026-03-10T14:46:35.492122+0000 mon.a (mon.0) 177 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T14:46:40.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:40 vm00 bash[20726]: cluster 2026-03-10T14:46:35.492652+0000 mon.a (mon.0) 178 : cluster [INF] mon.a calling monitor election 2026-03-10T14:46:40.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:40 vm00 bash[20726]: cluster 2026-03-10T14:46:35.492652+0000 mon.a (mon.0) 178 : cluster [INF] mon.a calling monitor election 2026-03-10T14:46:40.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:40 vm00 bash[20726]: audit 2026-03-10T14:46:35.541376+0000 mon.a (mon.0) 179 : audit [DBG] from='mgr.14152 
192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T14:46:40.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:40 vm00 bash[20726]: audit 2026-03-10T14:46:35.541376+0000 mon.a (mon.0) 179 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T14:46:40.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:40 vm00 bash[20726]: audit 2026-03-10T14:46:35.541637+0000 mon.a (mon.0) 180 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T14:46:40.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:40 vm00 bash[20726]: audit 2026-03-10T14:46:35.541637+0000 mon.a (mon.0) 180 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T14:46:40.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:40 vm00 bash[20726]: cluster 2026-03-10T14:46:35.600820+0000 mon.b (mon.1) 2 : cluster [INF] mon.b calling monitor election 2026-03-10T14:46:40.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:40 vm00 bash[20726]: cluster 2026-03-10T14:46:35.600820+0000 mon.b (mon.1) 2 : cluster [INF] mon.b calling monitor election 2026-03-10T14:46:40.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:40 vm00 bash[20726]: audit 2026-03-10T14:46:36.401368+0000 mon.a (mon.0) 181 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T14:46:40.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:40 vm00 bash[20726]: audit 2026-03-10T14:46:36.401368+0000 mon.a (mon.0) 181 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T14:46:40.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:40 vm00 
bash[20726]: cluster 2026-03-10T14:46:36.691143+0000 mgr.y (mgr.14152) 37 : cluster [DBG] pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:46:40.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:40 vm00 bash[20726]: cluster 2026-03-10T14:46:36.691143+0000 mgr.y (mgr.14152) 37 : cluster [DBG] pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:46:40.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:40 vm00 bash[20726]: audit 2026-03-10T14:46:37.401798+0000 mon.a (mon.0) 182 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T14:46:40.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:40 vm00 bash[20726]: audit 2026-03-10T14:46:37.401798+0000 mon.a (mon.0) 182 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T14:46:40.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:40 vm00 bash[20726]: cluster 2026-03-10T14:46:37.403934+0000 mon.c (mon.2) 1 : cluster [INF] mon.c calling monitor election 2026-03-10T14:46:40.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:40 vm00 bash[20726]: cluster 2026-03-10T14:46:37.403934+0000 mon.c (mon.2) 1 : cluster [INF] mon.c calling monitor election 2026-03-10T14:46:40.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:40 vm00 bash[20726]: audit 2026-03-10T14:46:38.401647+0000 mon.a (mon.0) 183 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T14:46:40.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:40 vm00 bash[20726]: audit 2026-03-10T14:46:38.401647+0000 mon.a (mon.0) 183 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T14:46:40.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:40 vm00 
bash[20726]: cluster 2026-03-10T14:46:38.691358+0000 mgr.y (mgr.14152) 38 : cluster [DBG] pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:46:40.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:40 vm00 bash[20726]: cluster 2026-03-10T14:46:38.691358+0000 mgr.y (mgr.14152) 38 : cluster [DBG] pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:46:40.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:40 vm00 bash[20726]: audit 2026-03-10T14:46:39.401706+0000 mon.a (mon.0) 184 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T14:46:40.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:40 vm00 bash[20726]: audit 2026-03-10T14:46:39.401706+0000 mon.a (mon.0) 184 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T14:46:40.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:40 vm00 bash[20726]: audit 2026-03-10T14:46:40.401951+0000 mon.a (mon.0) 185 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T14:46:40.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:40 vm00 bash[20726]: audit 2026-03-10T14:46:40.401951+0000 mon.a (mon.0) 185 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T14:46:40.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:40 vm00 bash[20726]: cluster 2026-03-10T14:46:40.545291+0000 mon.a (mon.0) 186 : cluster [INF] mon.a is new leader, mons a,b,c in quorum (ranks 0,1,2) 2026-03-10T14:46:40.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:40 vm00 bash[20726]: cluster 2026-03-10T14:46:40.545291+0000 mon.a (mon.0) 186 : cluster [INF] mon.a is new leader, mons a,b,c in quorum (ranks 0,1,2) 2026-03-10T14:46:40.972 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:40 vm00 bash[20726]: cluster 2026-03-10T14:46:40.550814+0000 mon.a (mon.0) 187 : cluster [DBG] monmap epoch 3 2026-03-10T14:46:40.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:40 vm00 bash[20726]: cluster 2026-03-10T14:46:40.550814+0000 mon.a (mon.0) 187 : cluster [DBG] monmap epoch 3 2026-03-10T14:46:40.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:40 vm00 bash[20726]: cluster 2026-03-10T14:46:40.550839+0000 mon.a (mon.0) 188 : cluster [DBG] fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 2026-03-10T14:46:40.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:40 vm00 bash[20726]: cluster 2026-03-10T14:46:40.550839+0000 mon.a (mon.0) 188 : cluster [DBG] fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 2026-03-10T14:46:40.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:40 vm00 bash[20726]: cluster 2026-03-10T14:46:40.550851+0000 mon.a (mon.0) 189 : cluster [DBG] last_changed 2026-03-10T14:46:35.401630+0000 2026-03-10T14:46:40.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:40 vm00 bash[20726]: cluster 2026-03-10T14:46:40.550851+0000 mon.a (mon.0) 189 : cluster [DBG] last_changed 2026-03-10T14:46:35.401630+0000 2026-03-10T14:46:40.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:40 vm00 bash[20726]: cluster 2026-03-10T14:46:40.550861+0000 mon.a (mon.0) 190 : cluster [DBG] created 2026-03-10T14:45:30.662441+0000 2026-03-10T14:46:40.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:40 vm00 bash[20726]: cluster 2026-03-10T14:46:40.550861+0000 mon.a (mon.0) 190 : cluster [DBG] created 2026-03-10T14:45:30.662441+0000 2026-03-10T14:46:40.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:40 vm00 bash[20726]: cluster 2026-03-10T14:46:40.550873+0000 mon.a (mon.0) 191 : cluster [DBG] min_mon_release 19 (squid) 2026-03-10T14:46:40.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:40 vm00 bash[20726]: cluster 2026-03-10T14:46:40.550873+0000 mon.a (mon.0) 191 : cluster [DBG] 
min_mon_release 19 (squid) 2026-03-10T14:46:40.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:40 vm00 bash[20726]: cluster 2026-03-10T14:46:40.550882+0000 mon.a (mon.0) 192 : cluster [DBG] election_strategy: 1 2026-03-10T14:46:40.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:40 vm00 bash[20726]: cluster 2026-03-10T14:46:40.550882+0000 mon.a (mon.0) 192 : cluster [DBG] election_strategy: 1 2026-03-10T14:46:40.973 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:40 vm00 bash[20726]: cluster 2026-03-10T14:46:40.550894+0000 mon.a (mon.0) 193 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a 2026-03-10T14:46:40.973 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:40 vm00 bash[20726]: cluster 2026-03-10T14:46:40.550894+0000 mon.a (mon.0) 193 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a 2026-03-10T14:46:40.973 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:40 vm00 bash[20726]: cluster 2026-03-10T14:46:40.550904+0000 mon.a (mon.0) 194 : cluster [DBG] 1: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.b 2026-03-10T14:46:40.973 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:40 vm00 bash[20726]: cluster 2026-03-10T14:46:40.550904+0000 mon.a (mon.0) 194 : cluster [DBG] 1: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.b 2026-03-10T14:46:40.973 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:40 vm00 bash[20726]: cluster 2026-03-10T14:46:40.550918+0000 mon.a (mon.0) 195 : cluster [DBG] 2: [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] mon.c 2026-03-10T14:46:40.973 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:40 vm00 bash[20726]: cluster 2026-03-10T14:46:40.550918+0000 mon.a (mon.0) 195 : cluster [DBG] 2: [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] mon.c 2026-03-10T14:46:40.973 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:40 vm00 bash[20726]: cluster 2026-03-10T14:46:40.551351+0000 mon.a (mon.0) 196 : cluster [DBG] fsmap 
2026-03-10T14:46:40.973 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:40 vm00 bash[20726]: cluster 2026-03-10T14:46:40.551351+0000 mon.a (mon.0) 196 : cluster [DBG] fsmap 2026-03-10T14:46:40.973 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:40 vm00 bash[20726]: cluster 2026-03-10T14:46:40.551383+0000 mon.a (mon.0) 197 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-10T14:46:40.973 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:40 vm00 bash[20726]: cluster 2026-03-10T14:46:40.551383+0000 mon.a (mon.0) 197 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-10T14:46:40.973 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:40 vm00 bash[20726]: cluster 2026-03-10T14:46:40.551563+0000 mon.a (mon.0) 198 : cluster [DBG] mgrmap e13: y(active, since 43s) 2026-03-10T14:46:40.973 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:40 vm00 bash[20726]: cluster 2026-03-10T14:46:40.551563+0000 mon.a (mon.0) 198 : cluster [DBG] mgrmap e13: y(active, since 43s) 2026-03-10T14:46:40.973 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:40 vm00 bash[20726]: cluster 2026-03-10T14:46:40.551669+0000 mon.a (mon.0) 199 : cluster [INF] overall HEALTH_OK 2026-03-10T14:46:40.973 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:40 vm00 bash[20726]: cluster 2026-03-10T14:46:40.551669+0000 mon.a (mon.0) 199 : cluster [INF] overall HEALTH_OK 2026-03-10T14:46:40.973 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:40 vm00 bash[20726]: audit 2026-03-10T14:46:40.576773+0000 mon.a (mon.0) 200 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:40.973 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:40 vm00 bash[20726]: audit 2026-03-10T14:46:40.576773+0000 mon.a (mon.0) 200 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:40.973 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:40 vm00 bash[20726]: audit 2026-03-10T14:46:40.582273+0000 mon.a (mon.0) 201 : audit [INF] 
from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:40.973 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:40 vm00 bash[20726]: audit 2026-03-10T14:46:40.582273+0000 mon.a (mon.0) 201 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:40.973 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:40 vm00 bash[20726]: audit 2026-03-10T14:46:40.587799+0000 mon.a (mon.0) 202 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:40.973 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:40 vm00 bash[20726]: audit 2026-03-10T14:46:40.587799+0000 mon.a (mon.0) 202 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:40.973 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:40 vm00 bash[20726]: audit 2026-03-10T14:46:40.594332+0000 mon.a (mon.0) 203 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:40.973 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:40 vm00 bash[20726]: audit 2026-03-10T14:46:40.594332+0000 mon.a (mon.0) 203 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:40.973 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:40 vm00 bash[20726]: audit 2026-03-10T14:46:40.599026+0000 mon.a (mon.0) 204 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:40.973 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:40 vm00 bash[20726]: audit 2026-03-10T14:46:40.599026+0000 mon.a (mon.0) 204 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:40.973 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:40 vm00 bash[20726]: audit 2026-03-10T14:46:40.600179+0000 mon.a (mon.0) 205 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:46:40.973 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:40 vm00 bash[20726]: audit 2026-03-10T14:46:40.600179+0000 mon.a (mon.0) 205 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:46:40.973 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:40 vm00 bash[20726]: audit 2026-03-10T14:46:40.600903+0000 mon.a (mon.0) 206 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:46:40.973 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:40 vm00 bash[20726]: audit 2026-03-10T14:46:40.600903+0000 mon.a (mon.0) 206 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:46:40.973 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cluster 2026-03-10T14:46:30.433097+0000 mon.b (mon.1) 1 : cluster [INF] mon.b calling monitor election 2026-03-10T14:46:40.973 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cluster 2026-03-10T14:46:30.433097+0000 mon.b (mon.1) 1 : cluster [INF] mon.b calling monitor election 2026-03-10T14:46:40.974 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cluster 2026-03-10T14:46:30.690438+0000 mgr.y (mgr.14152) 34 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:46:40.974 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cluster 2026-03-10T14:46:30.690438+0000 mgr.y (mgr.14152) 34 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:46:40.974 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: audit 2026-03-10T14:46:31.400882+0000 mon.a (mon.0) 150 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 
2026-03-10T14:46:40.974 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: audit 2026-03-10T14:46:31.400882+0000 mon.a (mon.0) 150 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T14:46:40.974 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: audit 2026-03-10T14:46:31.428326+0000 mon.a (mon.0) 151 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T14:46:40.974 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: audit 2026-03-10T14:46:31.428326+0000 mon.a (mon.0) 151 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T14:46:40.974 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: audit 2026-03-10T14:46:32.400876+0000 mon.a (mon.0) 152 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T14:46:40.974 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: audit 2026-03-10T14:46:32.400876+0000 mon.a (mon.0) 152 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T14:46:40.974 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: audit 2026-03-10T14:46:32.428207+0000 mon.a (mon.0) 153 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T14:46:40.974 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: audit 2026-03-10T14:46:32.428207+0000 mon.a (mon.0) 153 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T14:46:40.974 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cluster 2026-03-10T14:46:32.690718+0000 mgr.y (mgr.14152) 35 : cluster [DBG] pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:46:40.974 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cluster 2026-03-10T14:46:32.690718+0000 mgr.y (mgr.14152) 35 : cluster [DBG] pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:46:40.974 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: audit 2026-03-10T14:46:33.400792+0000 mon.a (mon.0) 154 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T14:46:40.974 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: audit 2026-03-10T14:46:33.400792+0000 mon.a (mon.0) 154 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T14:46:40.974 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: audit 2026-03-10T14:46:33.428381+0000 mon.a (mon.0) 155 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T14:46:40.974 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: audit 2026-03-10T14:46:33.428381+0000 mon.a (mon.0) 155 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T14:46:40.974 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cluster 2026-03-10T14:46:33.445024+0000 mon.a (mon.0) 156 : cluster [INF] mon.a is new leader, mons a,b in quorum (ranks 0,1) 2026-03-10T14:46:40.974 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cluster 2026-03-10T14:46:33.445024+0000 mon.a (mon.0) 156 : cluster [INF] mon.a is new leader, mons a,b in quorum 
(ranks 0,1) 2026-03-10T14:46:40.974 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cluster 2026-03-10T14:46:33.449378+0000 mon.a (mon.0) 157 : cluster [DBG] monmap epoch 2 2026-03-10T14:46:40.974 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cluster 2026-03-10T14:46:33.449378+0000 mon.a (mon.0) 157 : cluster [DBG] monmap epoch 2 2026-03-10T14:46:40.974 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cluster 2026-03-10T14:46:33.449426+0000 mon.a (mon.0) 158 : cluster [DBG] fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 2026-03-10T14:46:40.974 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cluster 2026-03-10T14:46:33.449426+0000 mon.a (mon.0) 158 : cluster [DBG] fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 2026-03-10T14:46:40.974 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cluster 2026-03-10T14:46:33.449465+0000 mon.a (mon.0) 159 : cluster [DBG] last_changed 2026-03-10T14:46:28.428565+0000 2026-03-10T14:46:40.974 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cluster 2026-03-10T14:46:33.449465+0000 mon.a (mon.0) 159 : cluster [DBG] last_changed 2026-03-10T14:46:28.428565+0000 2026-03-10T14:46:40.974 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cluster 2026-03-10T14:46:33.449504+0000 mon.a (mon.0) 160 : cluster [DBG] created 2026-03-10T14:45:30.662441+0000 2026-03-10T14:46:40.974 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cluster 2026-03-10T14:46:33.449504+0000 mon.a (mon.0) 160 : cluster [DBG] created 2026-03-10T14:45:30.662441+0000 2026-03-10T14:46:40.974 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cluster 2026-03-10T14:46:33.449542+0000 mon.a (mon.0) 161 : cluster [DBG] min_mon_release 19 (squid) 2026-03-10T14:46:40.974 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cluster 2026-03-10T14:46:33.449542+0000 
mon.a (mon.0) 161 : cluster [DBG] min_mon_release 19 (squid) 2026-03-10T14:46:40.974 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cluster 2026-03-10T14:46:33.449581+0000 mon.a (mon.0) 162 : cluster [DBG] election_strategy: 1 2026-03-10T14:46:40.974 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cluster 2026-03-10T14:46:33.449581+0000 mon.a (mon.0) 162 : cluster [DBG] election_strategy: 1 2026-03-10T14:46:40.974 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cluster 2026-03-10T14:46:33.449620+0000 mon.a (mon.0) 163 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a 2026-03-10T14:46:40.974 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cluster 2026-03-10T14:46:33.449620+0000 mon.a (mon.0) 163 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a 2026-03-10T14:46:40.974 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cluster 2026-03-10T14:46:33.449658+0000 mon.a (mon.0) 164 : cluster [DBG] 1: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.b 2026-03-10T14:46:40.974 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cluster 2026-03-10T14:46:33.449658+0000 mon.a (mon.0) 164 : cluster [DBG] 1: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.b 2026-03-10T14:46:40.974 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cluster 2026-03-10T14:46:33.450153+0000 mon.a (mon.0) 165 : cluster [DBG] fsmap 2026-03-10T14:46:40.974 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cluster 2026-03-10T14:46:33.450153+0000 mon.a (mon.0) 165 : cluster [DBG] fsmap 2026-03-10T14:46:40.975 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cluster 2026-03-10T14:46:33.450230+0000 mon.a (mon.0) 166 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-10T14:46:40.975 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cluster 2026-03-10T14:46:33.450230+0000 mon.a (mon.0) 166 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-10T14:46:40.975 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cluster 2026-03-10T14:46:33.450468+0000 mon.a (mon.0) 167 : cluster [DBG] mgrmap e13: y(active, since 36s) 2026-03-10T14:46:40.975 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cluster 2026-03-10T14:46:33.450468+0000 mon.a (mon.0) 167 : cluster [DBG] mgrmap e13: y(active, since 36s) 2026-03-10T14:46:40.975 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cluster 2026-03-10T14:46:33.450761+0000 mon.a (mon.0) 168 : cluster [INF] overall HEALTH_OK 2026-03-10T14:46:40.975 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cluster 2026-03-10T14:46:33.450761+0000 mon.a (mon.0) 168 : cluster [INF] overall HEALTH_OK 2026-03-10T14:46:40.975 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: audit 2026-03-10T14:46:33.457976+0000 mon.a (mon.0) 169 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:40.975 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: audit 2026-03-10T14:46:33.457976+0000 mon.a (mon.0) 169 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:40.975 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: audit 2026-03-10T14:46:33.466905+0000 mon.a (mon.0) 170 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:40.975 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: audit 2026-03-10T14:46:33.466905+0000 mon.a (mon.0) 170 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:40.975 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: audit 
2026-03-10T14:46:33.473488+0000 mon.a (mon.0) 171 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:40.975 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: audit 2026-03-10T14:46:33.473488+0000 mon.a (mon.0) 171 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:40.975 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: audit 2026-03-10T14:46:33.478241+0000 mon.a (mon.0) 172 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:40.975 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: audit 2026-03-10T14:46:33.478241+0000 mon.a (mon.0) 172 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:40.975 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: audit 2026-03-10T14:46:33.490664+0000 mon.a (mon.0) 173 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:46:40.975 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: audit 2026-03-10T14:46:33.490664+0000 mon.a (mon.0) 173 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:46:40.975 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: audit 2026-03-10T14:46:34.400989+0000 mon.a (mon.0) 174 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T14:46:40.975 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: audit 2026-03-10T14:46:34.400989+0000 mon.a (mon.0) 174 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T14:46:40.975 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: audit 2026-03-10T14:46:34.428318+0000 mon.a (mon.0) 175 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T14:46:40.975 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: audit 2026-03-10T14:46:34.428318+0000 mon.a (mon.0) 175 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T14:46:40.975 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: audit 2026-03-10T14:46:35.492122+0000 mon.a (mon.0) 177 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T14:46:40.975 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: audit 2026-03-10T14:46:35.492122+0000 mon.a (mon.0) 177 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T14:46:40.975 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cluster 2026-03-10T14:46:35.492652+0000 mon.a (mon.0) 178 : cluster [INF] mon.a calling monitor election 2026-03-10T14:46:40.975 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cluster 2026-03-10T14:46:35.492652+0000 mon.a (mon.0) 178 : cluster [INF] mon.a calling monitor election 2026-03-10T14:46:40.975 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: audit 2026-03-10T14:46:35.541376+0000 mon.a (mon.0) 179 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T14:46:40.975 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: audit 2026-03-10T14:46:35.541376+0000 mon.a (mon.0) 179 : audit [DBG] from='mgr.14152 
192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T14:46:40.975 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: audit 2026-03-10T14:46:35.541637+0000 mon.a (mon.0) 180 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T14:46:40.975 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: audit 2026-03-10T14:46:35.541637+0000 mon.a (mon.0) 180 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T14:46:40.975 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cluster 2026-03-10T14:46:35.600820+0000 mon.b (mon.1) 2 : cluster [INF] mon.b calling monitor election 2026-03-10T14:46:40.975 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cluster 2026-03-10T14:46:35.600820+0000 mon.b (mon.1) 2 : cluster [INF] mon.b calling monitor election 2026-03-10T14:46:40.975 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: audit 2026-03-10T14:46:36.401368+0000 mon.a (mon.0) 181 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T14:46:40.975 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: audit 2026-03-10T14:46:36.401368+0000 mon.a (mon.0) 181 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T14:46:40.975 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cluster 2026-03-10T14:46:36.691143+0000 mgr.y (mgr.14152) 37 : cluster [DBG] pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:46:40.975 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cluster 2026-03-10T14:46:36.691143+0000 mgr.y 
(mgr.14152) 37 : cluster [DBG] pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:46:40.975 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: audit 2026-03-10T14:46:37.401798+0000 mon.a (mon.0) 182 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T14:46:40.975 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: audit 2026-03-10T14:46:37.401798+0000 mon.a (mon.0) 182 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T14:46:40.975 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cluster 2026-03-10T14:46:37.403934+0000 mon.c (mon.2) 1 : cluster [INF] mon.c calling monitor election 2026-03-10T14:46:40.975 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cluster 2026-03-10T14:46:37.403934+0000 mon.c (mon.2) 1 : cluster [INF] mon.c calling monitor election 2026-03-10T14:46:40.975 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: audit 2026-03-10T14:46:38.401647+0000 mon.a (mon.0) 183 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T14:46:40.975 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: audit 2026-03-10T14:46:38.401647+0000 mon.a (mon.0) 183 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T14:46:40.975 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cluster 2026-03-10T14:46:38.691358+0000 mgr.y (mgr.14152) 38 : cluster [DBG] pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:46:40.975 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cluster 2026-03-10T14:46:38.691358+0000 mgr.y (mgr.14152) 
38 : cluster [DBG] pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:46:40.976 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: audit 2026-03-10T14:46:39.401706+0000 mon.a (mon.0) 184 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T14:46:40.976 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: audit 2026-03-10T14:46:39.401706+0000 mon.a (mon.0) 184 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T14:46:40.976 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: audit 2026-03-10T14:46:40.401951+0000 mon.a (mon.0) 185 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T14:46:40.976 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: audit 2026-03-10T14:46:40.401951+0000 mon.a (mon.0) 185 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T14:46:40.976 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cluster 2026-03-10T14:46:40.545291+0000 mon.a (mon.0) 186 : cluster [INF] mon.a is new leader, mons a,b,c in quorum (ranks 0,1,2) 2026-03-10T14:46:40.976 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cluster 2026-03-10T14:46:40.545291+0000 mon.a (mon.0) 186 : cluster [INF] mon.a is new leader, mons a,b,c in quorum (ranks 0,1,2) 2026-03-10T14:46:40.976 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cluster 2026-03-10T14:46:40.550814+0000 mon.a (mon.0) 187 : cluster [DBG] monmap epoch 3 2026-03-10T14:46:40.976 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cluster 2026-03-10T14:46:40.550814+0000 mon.a (mon.0) 187 : 
cluster [DBG] monmap epoch 3 2026-03-10T14:46:40.976 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cluster 2026-03-10T14:46:40.550839+0000 mon.a (mon.0) 188 : cluster [DBG] fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 2026-03-10T14:46:40.976 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cluster 2026-03-10T14:46:40.550839+0000 mon.a (mon.0) 188 : cluster [DBG] fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 2026-03-10T14:46:40.976 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cluster 2026-03-10T14:46:40.550851+0000 mon.a (mon.0) 189 : cluster [DBG] last_changed 2026-03-10T14:46:35.401630+0000 2026-03-10T14:46:40.976 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cluster 2026-03-10T14:46:40.550851+0000 mon.a (mon.0) 189 : cluster [DBG] last_changed 2026-03-10T14:46:35.401630+0000 2026-03-10T14:46:40.976 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cluster 2026-03-10T14:46:40.550861+0000 mon.a (mon.0) 190 : cluster [DBG] created 2026-03-10T14:45:30.662441+0000 2026-03-10T14:46:40.976 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cluster 2026-03-10T14:46:40.550861+0000 mon.a (mon.0) 190 : cluster [DBG] created 2026-03-10T14:45:30.662441+0000 2026-03-10T14:46:40.976 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cluster 2026-03-10T14:46:40.550873+0000 mon.a (mon.0) 191 : cluster [DBG] min_mon_release 19 (squid) 2026-03-10T14:46:40.976 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cluster 2026-03-10T14:46:40.550873+0000 mon.a (mon.0) 191 : cluster [DBG] min_mon_release 19 (squid) 2026-03-10T14:46:40.976 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cluster 2026-03-10T14:46:40.550882+0000 mon.a (mon.0) 192 : cluster [DBG] election_strategy: 1 2026-03-10T14:46:40.976 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: 
cluster 2026-03-10T14:46:40.550882+0000 mon.a (mon.0) 192 : cluster [DBG] election_strategy: 1 2026-03-10T14:46:40.976 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cluster 2026-03-10T14:46:40.550894+0000 mon.a (mon.0) 193 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a 2026-03-10T14:46:40.976 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cluster 2026-03-10T14:46:40.550894+0000 mon.a (mon.0) 193 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a 2026-03-10T14:46:40.976 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cluster 2026-03-10T14:46:40.550904+0000 mon.a (mon.0) 194 : cluster [DBG] 1: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.b 2026-03-10T14:46:40.976 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cluster 2026-03-10T14:46:40.550904+0000 mon.a (mon.0) 194 : cluster [DBG] 1: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.b 2026-03-10T14:46:40.976 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cluster 2026-03-10T14:46:40.550918+0000 mon.a (mon.0) 195 : cluster [DBG] 2: [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] mon.c 2026-03-10T14:46:40.976 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cluster 2026-03-10T14:46:40.550918+0000 mon.a (mon.0) 195 : cluster [DBG] 2: [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] mon.c 2026-03-10T14:46:40.976 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cluster 2026-03-10T14:46:40.551351+0000 mon.a (mon.0) 196 : cluster [DBG] fsmap 2026-03-10T14:46:40.976 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cluster 2026-03-10T14:46:40.551351+0000 mon.a (mon.0) 196 : cluster [DBG] fsmap 2026-03-10T14:46:40.976 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cluster 2026-03-10T14:46:40.551383+0000 mon.a 
(mon.0) 197 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-10T14:46:40.976 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cluster 2026-03-10T14:46:40.551383+0000 mon.a (mon.0) 197 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-10T14:46:40.976 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cluster 2026-03-10T14:46:40.551563+0000 mon.a (mon.0) 198 : cluster [DBG] mgrmap e13: y(active, since 43s) 2026-03-10T14:46:40.976 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cluster 2026-03-10T14:46:40.551563+0000 mon.a (mon.0) 198 : cluster [DBG] mgrmap e13: y(active, since 43s) 2026-03-10T14:46:40.976 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cluster 2026-03-10T14:46:40.551669+0000 mon.a (mon.0) 199 : cluster [INF] overall HEALTH_OK 2026-03-10T14:46:40.976 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: cluster 2026-03-10T14:46:40.551669+0000 mon.a (mon.0) 199 : cluster [INF] overall HEALTH_OK 2026-03-10T14:46:40.976 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: audit 2026-03-10T14:46:40.576773+0000 mon.a (mon.0) 200 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:40.976 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: audit 2026-03-10T14:46:40.576773+0000 mon.a (mon.0) 200 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:40.976 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: audit 2026-03-10T14:46:40.582273+0000 mon.a (mon.0) 201 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:40.976 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: audit 2026-03-10T14:46:40.582273+0000 mon.a (mon.0) 201 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:40.976 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: audit 2026-03-10T14:46:40.587799+0000 mon.a (mon.0) 202 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:40.976 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: audit 2026-03-10T14:46:40.587799+0000 mon.a (mon.0) 202 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:40.976 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: audit 2026-03-10T14:46:40.594332+0000 mon.a (mon.0) 203 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:40.976 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: audit 2026-03-10T14:46:40.594332+0000 mon.a (mon.0) 203 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:40.977 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: audit 2026-03-10T14:46:40.599026+0000 mon.a (mon.0) 204 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:40.977 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: audit 2026-03-10T14:46:40.599026+0000 mon.a (mon.0) 204 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:40.977 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: audit 2026-03-10T14:46:40.600179+0000 mon.a (mon.0) 205 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:46:40.977 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: audit 2026-03-10T14:46:40.600179+0000 mon.a (mon.0) 205 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:46:40.977 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 
vm00 bash[28403]: audit 2026-03-10T14:46:40.600903+0000 mon.a (mon.0) 206 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:46:40.977 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:40 vm00 bash[28403]: audit 2026-03-10T14:46:40.600903+0000 mon.a (mon.0) 206 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:46:41.611 INFO:teuthology.orchestra.run.vm03.stderr:dumped monmap epoch 3 2026-03-10T14:46:41.611 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T14:46:41.611 INFO:teuthology.orchestra.run.vm03.stdout:{"epoch":3,"fsid":"93bd26bc-1c8f-11f1-8404-610ce866bde7","modified":"2026-03-10T14:46:35.401630Z","created":"2026-03-10T14:45:30.662441Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:3300","nonce":0},{"type":"v1","addr":"192.168.123.100:6789","nonce":0}]},"addr":"192.168.123.100:6789/0","public_addr":"192.168.123.100:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"b","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:3300","nonce":0},{"type":"v1","addr":"192.168.123.103:6789","nonce":0}]},"addr":"192.168.123.103:6789/0","public_addr":"192.168.123.103:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"c","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:3301","nonce":0},{"type":"v1","addr":"192.168.123.100:6790","nonce":0}]},"addr":"192.168.123.100:6790/0","public_addr":"192.168.123.100:6790/0","priority":0,"weight":0,"crus
h_location":"{}"}],"quorum":[0,1,2]} 2026-03-10T14:46:41.733 INFO:tasks.cephadm:Generating final ceph.conf file... 2026-03-10T14:46:41.733 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 -- ceph config generate-minimal-conf 2026-03-10T14:46:41.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:41 vm00 bash[20726]: cephadm 2026-03-10T14:46:40.601713+0000 mgr.y (mgr.14152) 39 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf 2026-03-10T14:46:41.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:41 vm00 bash[20726]: cephadm 2026-03-10T14:46:40.601713+0000 mgr.y (mgr.14152) 39 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf 2026-03-10T14:46:41.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:41 vm00 bash[20726]: cephadm 2026-03-10T14:46:40.601859+0000 mgr.y (mgr.14152) 40 : cephadm [INF] Updating vm03:/etc/ceph/ceph.conf 2026-03-10T14:46:41.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:41 vm00 bash[20726]: cephadm 2026-03-10T14:46:40.601859+0000 mgr.y (mgr.14152) 40 : cephadm [INF] Updating vm03:/etc/ceph/ceph.conf 2026-03-10T14:46:41.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:41 vm00 bash[20726]: cephadm 2026-03-10T14:46:40.650965+0000 mgr.y (mgr.14152) 41 : cephadm [INF] Updating vm00:/var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/config/ceph.conf 2026-03-10T14:46:41.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:41 vm00 bash[20726]: cephadm 2026-03-10T14:46:40.650965+0000 mgr.y (mgr.14152) 41 : cephadm [INF] Updating vm00:/var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/config/ceph.conf 2026-03-10T14:46:41.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:41 vm00 bash[20726]: cephadm 2026-03-10T14:46:40.657956+0000 mgr.y (mgr.14152) 42 : cephadm [INF] Updating 
vm03:/var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/config/ceph.conf 2026-03-10T14:46:41.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:41 vm00 bash[20726]: cephadm 2026-03-10T14:46:40.657956+0000 mgr.y (mgr.14152) 42 : cephadm [INF] Updating vm03:/var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/config/ceph.conf 2026-03-10T14:46:41.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:41 vm00 bash[20726]: cluster 2026-03-10T14:46:40.691553+0000 mgr.y (mgr.14152) 43 : cluster [DBG] pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:46:41.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:41 vm00 bash[20726]: cluster 2026-03-10T14:46:40.691553+0000 mgr.y (mgr.14152) 43 : cluster [DBG] pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:46:41.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:41 vm00 bash[20726]: audit 2026-03-10T14:46:40.693793+0000 mon.a (mon.0) 207 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:41.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:41 vm00 bash[20726]: audit 2026-03-10T14:46:40.693793+0000 mon.a (mon.0) 207 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:41.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:41 vm00 bash[20726]: audit 2026-03-10T14:46:40.700211+0000 mon.a (mon.0) 208 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:41.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:41 vm00 bash[20726]: audit 2026-03-10T14:46:40.700211+0000 mon.a (mon.0) 208 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:41.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:41 vm00 bash[20726]: audit 2026-03-10T14:46:40.705552+0000 mon.a (mon.0) 209 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:41.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 
14:46:41 vm00 bash[20726]: audit 2026-03-10T14:46:40.705552+0000 mon.a (mon.0) 209 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:41.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:41 vm00 bash[20726]: audit 2026-03-10T14:46:40.710749+0000 mon.a (mon.0) 210 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:41.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:41 vm00 bash[20726]: audit 2026-03-10T14:46:40.710749+0000 mon.a (mon.0) 210 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:41.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:41 vm00 bash[20726]: audit 2026-03-10T14:46:40.715188+0000 mon.a (mon.0) 211 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:41.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:41 vm00 bash[20726]: audit 2026-03-10T14:46:40.715188+0000 mon.a (mon.0) 211 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:41.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:41 vm00 bash[20726]: audit 2026-03-10T14:46:40.729546+0000 mon.a (mon.0) 212 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:41.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:41 vm00 bash[20726]: audit 2026-03-10T14:46:40.729546+0000 mon.a (mon.0) 212 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:41.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:41 vm00 bash[20726]: audit 2026-03-10T14:46:40.733911+0000 mon.a (mon.0) 213 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:41.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:41 vm00 bash[20726]: audit 2026-03-10T14:46:40.733911+0000 mon.a (mon.0) 213 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 
2026-03-10T14:46:41.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:41 vm00 bash[20726]: audit 2026-03-10T14:46:40.738098+0000 mon.a (mon.0) 214 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:41.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:41 vm00 bash[20726]: audit 2026-03-10T14:46:40.743061+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:41.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:41 vm00 bash[20726]: cephadm 2026-03-10T14:46:40.743524+0000 mgr.y (mgr.14152) 44 : cephadm [INF] Reconfiguring mon.c (monmap changed)...
2026-03-10T14:46:41.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:41 vm00 bash[20726]: audit 2026-03-10T14:46:40.743901+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T14:46:41.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:41 vm00 bash[20726]: audit 2026-03-10T14:46:40.744573+0000 mon.a (mon.0) 217 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-10T14:46:41.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:41 vm00 bash[20726]: audit 2026-03-10T14:46:40.745101+0000 mon.a (mon.0) 218 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:46:41.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:41 vm00 bash[20726]: cephadm 2026-03-10T14:46:40.745894+0000 mgr.y (mgr.14152) 45 : cephadm [INF] Reconfiguring daemon mon.c on vm00
2026-03-10T14:46:41.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:41 vm00 bash[20726]: audit 2026-03-10T14:46:41.185509+0000 mon.a (mon.0) 219 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:41.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:41 vm00 bash[20726]: audit 2026-03-10T14:46:41.191985+0000 mon.a (mon.0) 220 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:41.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:41 vm00 bash[20726]: cephadm 2026-03-10T14:46:41.192569+0000 mgr.y (mgr.14152) 46 : cephadm [INF] Reconfiguring mon.a (unknown last config time)...
2026-03-10T14:46:41.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:41 vm00 bash[20726]: audit 2026-03-10T14:46:41.192874+0000 mon.a (mon.0) 221 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T14:46:41.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:41 vm00 bash[20726]: audit 2026-03-10T14:46:41.193322+0000 mon.a (mon.0) 222 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-10T14:46:41.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:41 vm00 bash[20726]: audit 2026-03-10T14:46:41.193720+0000 mon.a (mon.0) 223 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:46:41.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:41 vm00 bash[20726]: cephadm 2026-03-10T14:46:41.194265+0000 mgr.y (mgr.14152) 47 : cephadm [INF] Reconfiguring daemon mon.a on vm00
2026-03-10T14:46:41.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:41 vm00 bash[20726]: audit 2026-03-10T14:46:41.401838+0000 mon.a (mon.0) 224 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T14:46:41.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:41 vm00 bash[20726]: audit 2026-03-10T14:46:41.608270+0000 mon.a (mon.0) 225 : audit [DBG] from='client.? 192.168.123.103:0/3652662356' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-10T14:46:41.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:41 vm00 bash[20726]: audit 2026-03-10T14:46:41.610974+0000 mon.a (mon.0) 226 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:41.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:41 vm00 bash[20726]: audit 2026-03-10T14:46:41.616669+0000 mon.a (mon.0) 227 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:41.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:41 vm00 bash[20726]: audit 2026-03-10T14:46:41.618259+0000 mon.a (mon.0) 228 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T14:46:41.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:41 vm00 bash[20726]: audit 2026-03-10T14:46:41.618979+0000 mon.a (mon.0) 229 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-10T14:46:41.972 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:41 vm00 bash[20726]: audit 2026-03-10T14:46:41.619477+0000 mon.a (mon.0) 230 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:46:41.972 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:41 vm00 bash[28403]: cephadm 2026-03-10T14:46:40.601713+0000 mgr.y (mgr.14152) 39 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf
2026-03-10T14:46:41.972 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:41 vm00 bash[28403]: cephadm 2026-03-10T14:46:40.601859+0000 mgr.y (mgr.14152) 40 : cephadm [INF] Updating vm03:/etc/ceph/ceph.conf
2026-03-10T14:46:41.972 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:41 vm00 bash[28403]: cephadm 2026-03-10T14:46:40.650965+0000 mgr.y (mgr.14152) 41 : cephadm [INF] Updating vm00:/var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/config/ceph.conf
2026-03-10T14:46:41.972 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:41 vm00 bash[28403]: cephadm 2026-03-10T14:46:40.657956+0000 mgr.y (mgr.14152) 42 : cephadm [INF] Updating vm03:/var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/config/ceph.conf
2026-03-10T14:46:41.972 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:41 vm00 bash[28403]: cluster 2026-03-10T14:46:40.691553+0000 mgr.y (mgr.14152) 43 : cluster [DBG] pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T14:46:41.972 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:41 vm00 bash[28403]: audit 2026-03-10T14:46:40.693793+0000 mon.a (mon.0) 207 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:41.972 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:41 vm00 bash[28403]: audit 2026-03-10T14:46:40.700211+0000 mon.a (mon.0) 208 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:41.972 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:41 vm00 bash[28403]: audit 2026-03-10T14:46:40.705552+0000 mon.a (mon.0) 209 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:41.972 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:41 vm00 bash[28403]: audit 2026-03-10T14:46:40.710749+0000 mon.a (mon.0) 210 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:41.972 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:41 vm00 bash[28403]: audit 2026-03-10T14:46:40.715188+0000 mon.a (mon.0) 211 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:41.972 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:41 vm00 bash[28403]: audit 2026-03-10T14:46:40.729546+0000 mon.a (mon.0) 212 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:41.972 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:41 vm00 bash[28403]: audit 2026-03-10T14:46:40.733911+0000 mon.a (mon.0) 213 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:41.972 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:41 vm00 bash[28403]: audit 2026-03-10T14:46:40.738098+0000 mon.a (mon.0) 214 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:41.973 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:41 vm00 bash[28403]: audit 2026-03-10T14:46:40.743061+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:41.973 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:41 vm00 bash[28403]: cephadm 2026-03-10T14:46:40.743524+0000 mgr.y (mgr.14152) 44 : cephadm [INF] Reconfiguring mon.c (monmap changed)...
2026-03-10T14:46:41.973 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:41 vm00 bash[28403]: audit 2026-03-10T14:46:40.743901+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T14:46:41.973 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:41 vm00 bash[28403]: audit 2026-03-10T14:46:40.744573+0000 mon.a (mon.0) 217 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-10T14:46:41.973 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:41 vm00 bash[28403]: audit 2026-03-10T14:46:40.745101+0000 mon.a (mon.0) 218 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:46:41.973 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:41 vm00 bash[28403]: cephadm 2026-03-10T14:46:40.745894+0000 mgr.y (mgr.14152) 45 : cephadm [INF] Reconfiguring daemon mon.c on vm00
2026-03-10T14:46:41.973 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:41 vm00 bash[28403]: audit 2026-03-10T14:46:41.185509+0000 mon.a (mon.0) 219 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:41.973 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:41 vm00 bash[28403]: audit 2026-03-10T14:46:41.191985+0000 mon.a (mon.0) 220 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:41.973 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:41 vm00 bash[28403]: cephadm 2026-03-10T14:46:41.192569+0000 mgr.y (mgr.14152) 46 : cephadm [INF] Reconfiguring mon.a (unknown last config time)...
2026-03-10T14:46:41.973 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:41 vm00 bash[28403]: audit 2026-03-10T14:46:41.192874+0000 mon.a (mon.0) 221 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T14:46:41.973 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:41 vm00 bash[28403]: audit 2026-03-10T14:46:41.193322+0000 mon.a (mon.0) 222 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-10T14:46:41.973 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:41 vm00 bash[28403]: audit 2026-03-10T14:46:41.193720+0000 mon.a (mon.0) 223 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:46:41.973 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:41 vm00 bash[28403]: cephadm 2026-03-10T14:46:41.194265+0000 mgr.y (mgr.14152) 47 : cephadm [INF] Reconfiguring daemon mon.a on vm00
2026-03-10T14:46:41.973 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:41 vm00 bash[28403]: audit 2026-03-10T14:46:41.401838+0000 mon.a (mon.0) 224 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T14:46:41.973 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:41 vm00 bash[28403]: audit 2026-03-10T14:46:41.608270+0000 mon.a (mon.0) 225 : audit [DBG] from='client.? 192.168.123.103:0/3652662356' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-10T14:46:41.973 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:41 vm00 bash[28403]: audit 2026-03-10T14:46:41.610974+0000 mon.a (mon.0) 226 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:41.973 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:41 vm00 bash[28403]: audit 2026-03-10T14:46:41.616669+0000 mon.a (mon.0) 227 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:41.973 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:41 vm00 bash[28403]: audit 2026-03-10T14:46:41.618259+0000 mon.a (mon.0) 228 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T14:46:41.973 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:41 vm00 bash[28403]: audit 2026-03-10T14:46:41.618979+0000 mon.a (mon.0) 229 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-10T14:46:41.973 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:41 vm00 bash[28403]: audit 2026-03-10T14:46:41.619477+0000 mon.a (mon.0) 230 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:46:42.118 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:41 vm03 bash[23394]: cephadm 2026-03-10T14:46:40.601713+0000 mgr.y (mgr.14152) 39 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf
2026-03-10T14:46:42.118 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:41 vm03 bash[23394]: cephadm 2026-03-10T14:46:40.601859+0000 mgr.y (mgr.14152) 40 : cephadm [INF] Updating vm03:/etc/ceph/ceph.conf
2026-03-10T14:46:42.118 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:41 vm03 bash[23394]: cephadm 2026-03-10T14:46:40.650965+0000 mgr.y (mgr.14152) 41 : cephadm [INF] Updating vm00:/var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/config/ceph.conf
2026-03-10T14:46:42.118 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:41 vm03 bash[23394]: cephadm 2026-03-10T14:46:40.657956+0000 mgr.y (mgr.14152) 42 : cephadm [INF] Updating vm03:/var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/config/ceph.conf
2026-03-10T14:46:42.118 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:41 vm03 bash[23394]: cluster 2026-03-10T14:46:40.691553+0000 mgr.y (mgr.14152) 43 : cluster [DBG] pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T14:46:42.118 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:41 vm03 bash[23394]: audit 2026-03-10T14:46:40.693793+0000 mon.a (mon.0) 207 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:42.118 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:41 vm03 bash[23394]: audit 2026-03-10T14:46:40.700211+0000 mon.a (mon.0) 208 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:42.118 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:41 vm03 bash[23394]: audit 2026-03-10T14:46:40.705552+0000 mon.a (mon.0) 209 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:42.118 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:41 vm03 bash[23394]: audit 2026-03-10T14:46:40.710749+0000 mon.a (mon.0) 210 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:42.118 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:41 vm03 bash[23394]: audit 2026-03-10T14:46:40.715188+0000 mon.a (mon.0) 211 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:42.118 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:41 vm03 bash[23394]: audit 2026-03-10T14:46:40.729546+0000 mon.a (mon.0) 212 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:42.118 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:41 vm03 bash[23394]: audit 2026-03-10T14:46:40.733911+0000 mon.a (mon.0) 213 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:42.118 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:41 vm03 bash[23394]: audit 2026-03-10T14:46:40.738098+0000 mon.a (mon.0) 214 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:42.118 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:41 vm03 bash[23394]: audit 2026-03-10T14:46:40.743061+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:42.118 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:41 vm03 bash[23394]: cephadm 2026-03-10T14:46:40.743524+0000 mgr.y (mgr.14152) 44 : cephadm [INF] Reconfiguring mon.c (monmap changed)...
2026-03-10T14:46:42.118 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:41 vm03 bash[23394]: audit 2026-03-10T14:46:40.743901+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T14:46:42.118 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:41 vm03 bash[23394]: audit 2026-03-10T14:46:40.744573+0000 mon.a (mon.0) 217 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-10T14:46:42.118 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:41 vm03 bash[23394]: audit 2026-03-10T14:46:40.745101+0000 mon.a (mon.0) 218 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:46:42.119 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:41 vm03 bash[23394]: cephadm 2026-03-10T14:46:40.745894+0000 mgr.y (mgr.14152) 45 : cephadm [INF] Reconfiguring daemon mon.c on vm00
2026-03-10T14:46:42.119 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:41 vm03 bash[23394]: audit 2026-03-10T14:46:41.185509+0000 mon.a (mon.0) 219 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:42.119 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:41 vm03 bash[23394]: audit 2026-03-10T14:46:41.191985+0000 mon.a (mon.0) 220 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:42.119 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:41 vm03 bash[23394]: cephadm 2026-03-10T14:46:41.192569+0000 mgr.y (mgr.14152) 46 : cephadm [INF] Reconfiguring mon.a (unknown last config time)...
2026-03-10T14:46:42.119 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:41 vm03 bash[23394]: audit 2026-03-10T14:46:41.192874+0000 mon.a (mon.0) 221 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T14:46:42.119 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:41 vm03 bash[23394]: audit 2026-03-10T14:46:41.193322+0000 mon.a (mon.0) 222 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-10T14:46:42.119 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:41 vm03 bash[23394]: audit 2026-03-10T14:46:41.193720+0000 mon.a (mon.0) 223 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:46:42.119 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:41 vm03 bash[23394]: cephadm 2026-03-10T14:46:41.194265+0000 mgr.y (mgr.14152) 47 : cephadm [INF] Reconfiguring daemon mon.a on vm00
2026-03-10T14:46:42.119
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:41 vm03 bash[23394]: cephadm 2026-03-10T14:46:41.194265+0000 mgr.y (mgr.14152) 47 : cephadm [INF] Reconfiguring daemon mon.a on vm00 2026-03-10T14:46:42.119 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:41 vm03 bash[23394]: audit 2026-03-10T14:46:41.401838+0000 mon.a (mon.0) 224 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T14:46:42.119 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:41 vm03 bash[23394]: audit 2026-03-10T14:46:41.401838+0000 mon.a (mon.0) 224 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T14:46:42.119 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:41 vm03 bash[23394]: audit 2026-03-10T14:46:41.608270+0000 mon.a (mon.0) 225 : audit [DBG] from='client.? 192.168.123.103:0/3652662356' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T14:46:42.119 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:41 vm03 bash[23394]: audit 2026-03-10T14:46:41.608270+0000 mon.a (mon.0) 225 : audit [DBG] from='client.? 
192.168.123.103:0/3652662356' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T14:46:42.119 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:41 vm03 bash[23394]: audit 2026-03-10T14:46:41.610974+0000 mon.a (mon.0) 226 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:42.119 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:41 vm03 bash[23394]: audit 2026-03-10T14:46:41.610974+0000 mon.a (mon.0) 226 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:42.119 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:41 vm03 bash[23394]: audit 2026-03-10T14:46:41.616669+0000 mon.a (mon.0) 227 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:42.119 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:41 vm03 bash[23394]: audit 2026-03-10T14:46:41.616669+0000 mon.a (mon.0) 227 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:42.119 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:41 vm03 bash[23394]: audit 2026-03-10T14:46:41.618259+0000 mon.a (mon.0) 228 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T14:46:42.119 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:41 vm03 bash[23394]: audit 2026-03-10T14:46:41.618259+0000 mon.a (mon.0) 228 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T14:46:42.119 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:41 vm03 bash[23394]: audit 2026-03-10T14:46:41.618979+0000 mon.a (mon.0) 229 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T14:46:42.119 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:41 vm03 bash[23394]: 
audit 2026-03-10T14:46:41.618979+0000 mon.a (mon.0) 229 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T14:46:42.119 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:41 vm03 bash[23394]: audit 2026-03-10T14:46:41.619477+0000 mon.a (mon.0) 230 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:46:42.119 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:41 vm03 bash[23394]: audit 2026-03-10T14:46:41.619477+0000 mon.a (mon.0) 230 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:46:42.704 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:46:42 vm00 bash[21005]: debug 2026-03-10T14:46:42.395+0000 7f05d954f640 -1 mgr.server handle_report got status from non-daemon mon.c 2026-03-10T14:46:42.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:42 vm00 bash[20726]: cephadm 2026-03-10T14:46:41.617730+0000 mgr.y (mgr.14152) 48 : cephadm [INF] Reconfiguring mon.b (monmap changed)... 2026-03-10T14:46:42.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:42 vm00 bash[20726]: cephadm 2026-03-10T14:46:41.617730+0000 mgr.y (mgr.14152) 48 : cephadm [INF] Reconfiguring mon.b (monmap changed)... 
2026-03-10T14:46:42.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:42 vm00 bash[20726]: cephadm 2026-03-10T14:46:41.620109+0000 mgr.y (mgr.14152) 49 : cephadm [INF] Reconfiguring daemon mon.b on vm03 2026-03-10T14:46:42.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:42 vm00 bash[20726]: cephadm 2026-03-10T14:46:41.620109+0000 mgr.y (mgr.14152) 49 : cephadm [INF] Reconfiguring daemon mon.b on vm03 2026-03-10T14:46:42.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:42 vm00 bash[20726]: audit 2026-03-10T14:46:42.295992+0000 mon.a (mon.0) 231 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:42.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:42 vm00 bash[20726]: audit 2026-03-10T14:46:42.295992+0000 mon.a (mon.0) 231 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:42.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:42 vm00 bash[20726]: audit 2026-03-10T14:46:42.300927+0000 mon.a (mon.0) 232 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:42.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:42 vm00 bash[20726]: audit 2026-03-10T14:46:42.300927+0000 mon.a (mon.0) 232 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:42.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:42 vm00 bash[20726]: audit 2026-03-10T14:46:42.302149+0000 mon.a (mon.0) 233 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:46:42.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:42 vm00 bash[20726]: audit 2026-03-10T14:46:42.302149+0000 mon.a (mon.0) 233 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:46:42.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:42 vm00 
bash[20726]: audit 2026-03-10T14:46:42.303337+0000 mon.a (mon.0) 234 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:46:42.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:42 vm00 bash[20726]: audit 2026-03-10T14:46:42.303337+0000 mon.a (mon.0) 234 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:46:42.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:42 vm00 bash[20726]: audit 2026-03-10T14:46:42.303832+0000 mon.a (mon.0) 235 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:46:42.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:42 vm00 bash[20726]: audit 2026-03-10T14:46:42.303832+0000 mon.a (mon.0) 235 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:46:42.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:42 vm00 bash[20726]: audit 2026-03-10T14:46:42.313376+0000 mon.a (mon.0) 236 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:42.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:42 vm00 bash[20726]: audit 2026-03-10T14:46:42.313376+0000 mon.a (mon.0) 236 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:42.971 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:42 vm00 bash[28403]: cephadm 2026-03-10T14:46:41.617730+0000 mgr.y (mgr.14152) 48 : cephadm [INF] Reconfiguring mon.b (monmap changed)... 2026-03-10T14:46:42.971 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:42 vm00 bash[28403]: cephadm 2026-03-10T14:46:41.617730+0000 mgr.y (mgr.14152) 48 : cephadm [INF] Reconfiguring mon.b (monmap changed)... 
2026-03-10T14:46:42.972 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:42 vm00 bash[28403]: cephadm 2026-03-10T14:46:41.620109+0000 mgr.y (mgr.14152) 49 : cephadm [INF] Reconfiguring daemon mon.b on vm03 2026-03-10T14:46:42.972 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:42 vm00 bash[28403]: cephadm 2026-03-10T14:46:41.620109+0000 mgr.y (mgr.14152) 49 : cephadm [INF] Reconfiguring daemon mon.b on vm03 2026-03-10T14:46:42.972 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:42 vm00 bash[28403]: audit 2026-03-10T14:46:42.295992+0000 mon.a (mon.0) 231 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:42.972 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:42 vm00 bash[28403]: audit 2026-03-10T14:46:42.295992+0000 mon.a (mon.0) 231 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:42.972 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:42 vm00 bash[28403]: audit 2026-03-10T14:46:42.300927+0000 mon.a (mon.0) 232 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:42.972 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:42 vm00 bash[28403]: audit 2026-03-10T14:46:42.300927+0000 mon.a (mon.0) 232 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:42.972 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:42 vm00 bash[28403]: audit 2026-03-10T14:46:42.302149+0000 mon.a (mon.0) 233 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:46:42.972 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:42 vm00 bash[28403]: audit 2026-03-10T14:46:42.302149+0000 mon.a (mon.0) 233 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:46:42.972 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:42 vm00 
bash[28403]: audit 2026-03-10T14:46:42.303337+0000 mon.a (mon.0) 234 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:46:42.972 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:42 vm00 bash[28403]: audit 2026-03-10T14:46:42.303337+0000 mon.a (mon.0) 234 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:46:42.972 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:42 vm00 bash[28403]: audit 2026-03-10T14:46:42.303832+0000 mon.a (mon.0) 235 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:46:42.972 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:42 vm00 bash[28403]: audit 2026-03-10T14:46:42.303832+0000 mon.a (mon.0) 235 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:46:42.972 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:42 vm00 bash[28403]: audit 2026-03-10T14:46:42.313376+0000 mon.a (mon.0) 236 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:42.972 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:42 vm00 bash[28403]: audit 2026-03-10T14:46:42.313376+0000 mon.a (mon.0) 236 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:43.124 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:42 vm03 bash[23394]: cephadm 2026-03-10T14:46:41.617730+0000 mgr.y (mgr.14152) 48 : cephadm [INF] Reconfiguring mon.b (monmap changed)... 2026-03-10T14:46:43.124 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:42 vm03 bash[23394]: cephadm 2026-03-10T14:46:41.617730+0000 mgr.y (mgr.14152) 48 : cephadm [INF] Reconfiguring mon.b (monmap changed)... 
2026-03-10T14:46:43.124 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:42 vm03 bash[23394]: cephadm 2026-03-10T14:46:41.620109+0000 mgr.y (mgr.14152) 49 : cephadm [INF] Reconfiguring daemon mon.b on vm03 2026-03-10T14:46:43.124 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:42 vm03 bash[23394]: cephadm 2026-03-10T14:46:41.620109+0000 mgr.y (mgr.14152) 49 : cephadm [INF] Reconfiguring daemon mon.b on vm03 2026-03-10T14:46:43.124 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:42 vm03 bash[23394]: audit 2026-03-10T14:46:42.295992+0000 mon.a (mon.0) 231 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:43.124 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:42 vm03 bash[23394]: audit 2026-03-10T14:46:42.295992+0000 mon.a (mon.0) 231 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:43.124 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:42 vm03 bash[23394]: audit 2026-03-10T14:46:42.300927+0000 mon.a (mon.0) 232 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:43.124 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:42 vm03 bash[23394]: audit 2026-03-10T14:46:42.300927+0000 mon.a (mon.0) 232 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:43.124 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:42 vm03 bash[23394]: audit 2026-03-10T14:46:42.302149+0000 mon.a (mon.0) 233 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:46:43.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:42 vm03 bash[23394]: audit 2026-03-10T14:46:42.302149+0000 mon.a (mon.0) 233 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:46:43.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:42 vm03 
bash[23394]: audit 2026-03-10T14:46:42.303337+0000 mon.a (mon.0) 234 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:46:43.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:42 vm03 bash[23394]: audit 2026-03-10T14:46:42.303337+0000 mon.a (mon.0) 234 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:46:43.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:42 vm03 bash[23394]: audit 2026-03-10T14:46:42.303832+0000 mon.a (mon.0) 235 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:46:43.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:42 vm03 bash[23394]: audit 2026-03-10T14:46:42.303832+0000 mon.a (mon.0) 235 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:46:43.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:42 vm03 bash[23394]: audit 2026-03-10T14:46:42.313376+0000 mon.a (mon.0) 236 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:43.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:42 vm03 bash[23394]: audit 2026-03-10T14:46:42.313376+0000 mon.a (mon.0) 236 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:43.970 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:43 vm00 bash[20726]: cluster 2026-03-10T14:46:42.691767+0000 mgr.y (mgr.14152) 50 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:46:43.970 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:43 vm00 bash[20726]: cluster 2026-03-10T14:46:42.691767+0000 mgr.y (mgr.14152) 50 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 
2026-03-10T14:46:43.971 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:43 vm00 bash[28403]: cluster 2026-03-10T14:46:42.691767+0000 mgr.y (mgr.14152) 50 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:46:43.971 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:43 vm00 bash[28403]: cluster 2026-03-10T14:46:42.691767+0000 mgr.y (mgr.14152) 50 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:46:44.124 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:43 vm03 bash[23394]: cluster 2026-03-10T14:46:42.691767+0000 mgr.y (mgr.14152) 50 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:46:44.124 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:43 vm03 bash[23394]: cluster 2026-03-10T14:46:42.691767+0000 mgr.y (mgr.14152) 50 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:46:46.124 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:45 vm03 bash[23394]: cluster 2026-03-10T14:46:44.691985+0000 mgr.y (mgr.14152) 51 : cluster [DBG] pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:46:46.124 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:45 vm03 bash[23394]: cluster 2026-03-10T14:46:44.691985+0000 mgr.y (mgr.14152) 51 : cluster [DBG] pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:46:46.220 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:45 vm00 bash[28403]: cluster 2026-03-10T14:46:44.691985+0000 mgr.y (mgr.14152) 51 : cluster [DBG] pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:46:46.220 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:45 vm00 bash[28403]: cluster 2026-03-10T14:46:44.691985+0000 mgr.y (mgr.14152) 51 : cluster [DBG] pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:46:46.220 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:45 vm00 bash[20726]: cluster 2026-03-10T14:46:44.691985+0000 mgr.y (mgr.14152) 51 : 
cluster [DBG] pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:46:46.220 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:45 vm00 bash[20726]: cluster 2026-03-10T14:46:44.691985+0000 mgr.y (mgr.14152) 51 : cluster [DBG] pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:46:46.358 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/mon.c/config 2026-03-10T14:46:46.726 INFO:teuthology.orchestra.run.vm00.stdout:# minimal ceph.conf for 93bd26bc-1c8f-11f1-8404-610ce866bde7 2026-03-10T14:46:46.726 INFO:teuthology.orchestra.run.vm00.stdout:[global] 2026-03-10T14:46:46.726 INFO:teuthology.orchestra.run.vm00.stdout: fsid = 93bd26bc-1c8f-11f1-8404-610ce866bde7 2026-03-10T14:46:46.726 INFO:teuthology.orchestra.run.vm00.stdout: mon_host = [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] 2026-03-10T14:46:46.799 INFO:tasks.cephadm:Distributing (final) config and client.admin keyring... 
2026-03-10T14:46:46.800 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-10T14:46:46.800 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/etc/ceph/ceph.conf 2026-03-10T14:46:46.809 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-10T14:46:46.809 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-10T14:46:46.860 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-10T14:46:46.860 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/etc/ceph/ceph.conf 2026-03-10T14:46:46.868 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-10T14:46:46.868 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-10T14:46:46.917 INFO:tasks.cephadm:Adding mgr.y on vm00 2026-03-10T14:46:46.917 INFO:tasks.cephadm:Adding mgr.x on vm03 2026-03-10T14:46:46.917 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 -- ceph orch apply mgr '2;vm00=y;vm03=x' 2026-03-10T14:46:47.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:46 vm03 bash[23394]: audit 2026-03-10T14:46:46.722563+0000 mon.c (mon.2) 2 : audit [DBG] from='client.? 192.168.123.100:0/4198282711' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:46:47.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:46 vm03 bash[23394]: audit 2026-03-10T14:46:46.722563+0000 mon.c (mon.2) 2 : audit [DBG] from='client.? 192.168.123.100:0/4198282711' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:46:47.220 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:46 vm00 bash[28403]: audit 2026-03-10T14:46:46.722563+0000 mon.c (mon.2) 2 : audit [DBG] from='client.? 
192.168.123.100:0/4198282711' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:46:47.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:46 vm00 bash[28403]: audit 2026-03-10T14:46:46.722563+0000 mon.c (mon.2) 2 : audit [DBG] from='client.? 192.168.123.100:0/4198282711' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:46:47.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:46 vm00 bash[20726]: audit 2026-03-10T14:46:46.722563+0000 mon.c (mon.2) 2 : audit [DBG] from='client.? 192.168.123.100:0/4198282711' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:46:47.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:46 vm00 bash[20726]: audit 2026-03-10T14:46:46.722563+0000 mon.c (mon.2) 2 : audit [DBG] from='client.? 192.168.123.100:0/4198282711' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:46:48.220 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:47 vm00 bash[20726]: cluster 2026-03-10T14:46:46.692186+0000 mgr.y (mgr.14152) 52 : cluster [DBG] pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:46:48.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:47 vm00 bash[20726]: cluster 2026-03-10T14:46:46.692186+0000 mgr.y (mgr.14152) 52 : cluster [DBG] pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:46:48.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:47 vm00 bash[28403]: cluster 2026-03-10T14:46:46.692186+0000 mgr.y (mgr.14152) 52 : cluster [DBG] pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:46:48.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:47 vm00 bash[28403]: cluster 2026-03-10T14:46:46.692186+0000 mgr.y (mgr.14152) 52 : cluster [DBG] pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:46:48.374 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:47 vm03 
bash[23394]: cluster 2026-03-10T14:46:46.692186+0000 mgr.y (mgr.14152) 52 : cluster [DBG] pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:46:48.374 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:47 vm03 bash[23394]: cluster 2026-03-10T14:46:46.692186+0000 mgr.y (mgr.14152) 52 : cluster [DBG] pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:46:50.220 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:49 vm00 bash[20726]: cluster 2026-03-10T14:46:48.692395+0000 mgr.y (mgr.14152) 53 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:46:50.220 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:49 vm00 bash[20726]: cluster 2026-03-10T14:46:48.692395+0000 mgr.y (mgr.14152) 53 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:46:50.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:49 vm00 bash[28403]: cluster 2026-03-10T14:46:48.692395+0000 mgr.y (mgr.14152) 53 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:46:50.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:49 vm00 bash[28403]: cluster 2026-03-10T14:46:48.692395+0000 mgr.y (mgr.14152) 53 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:46:50.374 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:49 vm03 bash[23394]: cluster 2026-03-10T14:46:48.692395+0000 mgr.y (mgr.14152) 53 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:46:50.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:49 vm03 bash[23394]: cluster 2026-03-10T14:46:48.692395+0000 mgr.y (mgr.14152) 53 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:46:50.567 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/mon.b/config 2026-03-10T14:46:50.862 INFO:teuthology.orchestra.run.vm03.stdout:Scheduled mgr update... 
2026-03-10T14:46:50.945 DEBUG:teuthology.orchestra.run.vm03:mgr.x> sudo journalctl -f -n 0 -u ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@mgr.x.service 2026-03-10T14:46:50.946 INFO:tasks.cephadm:Deploying OSDs... 2026-03-10T14:46:50.946 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-10T14:46:50.946 DEBUG:teuthology.orchestra.run.vm00:> dd if=/scratch_devs of=/dev/stdout 2026-03-10T14:46:50.950 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-10T14:46:50.950 DEBUG:teuthology.orchestra.run.vm00:> ls /dev/[sv]d? 2026-03-10T14:46:50.996 INFO:teuthology.orchestra.run.vm00.stdout:/dev/vda 2026-03-10T14:46:50.996 INFO:teuthology.orchestra.run.vm00.stdout:/dev/vdb 2026-03-10T14:46:50.996 INFO:teuthology.orchestra.run.vm00.stdout:/dev/vdc 2026-03-10T14:46:50.996 INFO:teuthology.orchestra.run.vm00.stdout:/dev/vdd 2026-03-10T14:46:50.996 INFO:teuthology.orchestra.run.vm00.stdout:/dev/vde 2026-03-10T14:46:50.996 WARNING:teuthology.misc:Removing root device: /dev/vda from device list 2026-03-10T14:46:50.997 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde'] 2026-03-10T14:46:50.997 DEBUG:teuthology.orchestra.run.vm00:> stat /dev/vdb 2026-03-10T14:46:51.040 INFO:teuthology.orchestra.run.vm00.stdout: File: /dev/vdb 2026-03-10T14:46:51.041 INFO:teuthology.orchestra.run.vm00.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-10T14:46:51.041 INFO:teuthology.orchestra.run.vm00.stdout:Device: 5h/5d Inode: 24 Links: 1 Device type: fe,10 2026-03-10T14:46:51.041 INFO:teuthology.orchestra.run.vm00.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T14:46:51.041 INFO:teuthology.orchestra.run.vm00.stdout:Access: 2026-03-10 14:39:39.015256414 +0000 2026-03-10T14:46:51.041 INFO:teuthology.orchestra.run.vm00.stdout:Modify: 2026-03-10 14:39:38.023256414 +0000 2026-03-10T14:46:51.041 INFO:teuthology.orchestra.run.vm00.stdout:Change: 2026-03-10 14:39:38.023256414 +0000 2026-03-10T14:46:51.041 
INFO:teuthology.orchestra.run.vm00.stdout: Birth: - 2026-03-10T14:46:51.041 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/dev/vdb of=/dev/null count=1 2026-03-10T14:46:51.093 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records in 2026-03-10T14:46:51.094 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records out 2026-03-10T14:46:51.094 INFO:teuthology.orchestra.run.vm00.stderr:512 bytes copied, 0.000191749 s, 2.7 MB/s 2026-03-10T14:46:51.094 DEBUG:teuthology.orchestra.run.vm00:> ! mount | grep -v devtmpfs | grep -q /dev/vdb 2026-03-10T14:46:51.141 DEBUG:teuthology.orchestra.run.vm00:> stat /dev/vdc 2026-03-10T14:46:51.188 INFO:teuthology.orchestra.run.vm00.stdout: File: /dev/vdc 2026-03-10T14:46:51.188 INFO:teuthology.orchestra.run.vm00.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-10T14:46:51.188 INFO:teuthology.orchestra.run.vm00.stdout:Device: 5h/5d Inode: 25 Links: 1 Device type: fe,20 2026-03-10T14:46:51.188 INFO:teuthology.orchestra.run.vm00.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T14:46:51.188 INFO:teuthology.orchestra.run.vm00.stdout:Access: 2026-03-10 14:39:39.027256414 +0000 2026-03-10T14:46:51.188 INFO:teuthology.orchestra.run.vm00.stdout:Modify: 2026-03-10 14:39:38.031256414 +0000 2026-03-10T14:46:51.188 INFO:teuthology.orchestra.run.vm00.stdout:Change: 2026-03-10 14:39:38.031256414 +0000 2026-03-10T14:46:51.188 INFO:teuthology.orchestra.run.vm00.stdout: Birth: - 2026-03-10T14:46:51.188 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/dev/vdc of=/dev/null count=1 2026-03-10T14:46:51.237 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records in 2026-03-10T14:46:51.237 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records out 2026-03-10T14:46:51.237 INFO:teuthology.orchestra.run.vm00.stderr:512 bytes copied, 0.000184033 s, 2.8 MB/s 2026-03-10T14:46:51.237 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:46:51 vm03 systemd[1]: 
/etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:46:51.237 DEBUG:teuthology.orchestra.run.vm00:> ! mount | grep -v devtmpfs | grep -q /dev/vdc 2026-03-10T14:46:51.285 DEBUG:teuthology.orchestra.run.vm00:> stat /dev/vdd 2026-03-10T14:46:51.331 INFO:teuthology.orchestra.run.vm00.stdout: File: /dev/vdd 2026-03-10T14:46:51.332 INFO:teuthology.orchestra.run.vm00.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-10T14:46:51.332 INFO:teuthology.orchestra.run.vm00.stdout:Device: 5h/5d Inode: 26 Links: 1 Device type: fe,30 2026-03-10T14:46:51.332 INFO:teuthology.orchestra.run.vm00.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T14:46:51.332 INFO:teuthology.orchestra.run.vm00.stdout:Access: 2026-03-10 14:39:39.015256414 +0000 2026-03-10T14:46:51.332 INFO:teuthology.orchestra.run.vm00.stdout:Modify: 2026-03-10 14:39:38.031256414 +0000 2026-03-10T14:46:51.332 INFO:teuthology.orchestra.run.vm00.stdout:Change: 2026-03-10 14:39:38.031256414 +0000 2026-03-10T14:46:51.332 INFO:teuthology.orchestra.run.vm00.stdout: Birth: - 2026-03-10T14:46:51.332 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/dev/vdd of=/dev/null count=1 2026-03-10T14:46:51.379 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records in 2026-03-10T14:46:51.380 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records out 2026-03-10T14:46:51.380 INFO:teuthology.orchestra.run.vm00.stderr:512 bytes copied, 0.000159728 s, 3.2 MB/s 2026-03-10T14:46:51.380 DEBUG:teuthology.orchestra.run.vm00:> ! 
mount | grep -v devtmpfs | grep -q /dev/vdd 2026-03-10T14:46:51.424 DEBUG:teuthology.orchestra.run.vm00:> stat /dev/vde 2026-03-10T14:46:51.467 INFO:teuthology.orchestra.run.vm00.stdout: File: /dev/vde 2026-03-10T14:46:51.467 INFO:teuthology.orchestra.run.vm00.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-10T14:46:51.467 INFO:teuthology.orchestra.run.vm00.stdout:Device: 5h/5d Inode: 27 Links: 1 Device type: fe,40 2026-03-10T14:46:51.467 INFO:teuthology.orchestra.run.vm00.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T14:46:51.467 INFO:teuthology.orchestra.run.vm00.stdout:Access: 2026-03-10 14:39:39.023256414 +0000 2026-03-10T14:46:51.467 INFO:teuthology.orchestra.run.vm00.stdout:Modify: 2026-03-10 14:39:38.031256414 +0000 2026-03-10T14:46:51.467 INFO:teuthology.orchestra.run.vm00.stdout:Change: 2026-03-10 14:39:38.031256414 +0000 2026-03-10T14:46:51.467 INFO:teuthology.orchestra.run.vm00.stdout: Birth: - 2026-03-10T14:46:51.467 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/dev/vde of=/dev/null count=1 2026-03-10T14:46:51.515 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records in 2026-03-10T14:46:51.515 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records out 2026-03-10T14:46:51.515 INFO:teuthology.orchestra.run.vm00.stderr:512 bytes copied, 0.000188041 s, 2.7 MB/s 2026-03-10T14:46:51.516 DEBUG:teuthology.orchestra.run.vm00:> ! mount | grep -v devtmpfs | grep -q /dev/vde 2026-03-10T14:46:51.560 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-10T14:46:51.560 DEBUG:teuthology.orchestra.run.vm03:> dd if=/scratch_devs of=/dev/stdout 2026-03-10T14:46:51.564 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-10T14:46:51.564 DEBUG:teuthology.orchestra.run.vm03:> ls /dev/[sv]d? 
2026-03-10T14:46:51.608 INFO:teuthology.orchestra.run.vm03.stdout:/dev/vda 2026-03-10T14:46:51.608 INFO:teuthology.orchestra.run.vm03.stdout:/dev/vdb 2026-03-10T14:46:51.608 INFO:teuthology.orchestra.run.vm03.stdout:/dev/vdc 2026-03-10T14:46:51.608 INFO:teuthology.orchestra.run.vm03.stdout:/dev/vdd 2026-03-10T14:46:51.608 INFO:teuthology.orchestra.run.vm03.stdout:/dev/vde 2026-03-10T14:46:51.608 WARNING:teuthology.misc:Removing root device: /dev/vda from device list 2026-03-10T14:46:51.609 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde'] 2026-03-10T14:46:51.609 DEBUG:teuthology.orchestra.run.vm03:> stat /dev/vdb 2026-03-10T14:46:51.652 INFO:teuthology.orchestra.run.vm03.stdout: File: /dev/vdb 2026-03-10T14:46:51.652 INFO:teuthology.orchestra.run.vm03.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-10T14:46:51.652 INFO:teuthology.orchestra.run.vm03.stdout:Device: 5h/5d Inode: 24 Links: 1 Device type: fe,10 2026-03-10T14:46:51.652 INFO:teuthology.orchestra.run.vm03.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T14:46:51.652 INFO:teuthology.orchestra.run.vm03.stdout:Access: 2026-03-10 14:40:04.384078048 +0000 2026-03-10T14:46:51.652 INFO:teuthology.orchestra.run.vm03.stdout:Modify: 2026-03-10 14:40:03.308078048 +0000 2026-03-10T14:46:51.652 INFO:teuthology.orchestra.run.vm03.stdout:Change: 2026-03-10 14:40:03.308078048 +0000 2026-03-10T14:46:51.652 INFO:teuthology.orchestra.run.vm03.stdout: Birth: - 2026-03-10T14:46:51.652 DEBUG:teuthology.orchestra.run.vm03:> sudo dd if=/dev/vdb of=/dev/null count=1 2026-03-10T14:46:51.702 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:46:51 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:46:51.702 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:51 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:46:51.704 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records in
2026-03-10T14:46:51.704 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records out
2026-03-10T14:46:51.704 INFO:teuthology.orchestra.run.vm03.stderr:512 bytes copied, 0.000193963 s, 2.6 MB/s
2026-03-10T14:46:51.707 DEBUG:teuthology.orchestra.run.vm03:> ! mount | grep -v devtmpfs | grep -q /dev/vdb
2026-03-10T14:46:51.754 DEBUG:teuthology.orchestra.run.vm03:> stat /dev/vdc
2026-03-10T14:46:51.805 INFO:teuthology.orchestra.run.vm03.stdout: File: /dev/vdc
2026-03-10T14:46:51.805 INFO:teuthology.orchestra.run.vm03.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-10T14:46:51.805 INFO:teuthology.orchestra.run.vm03.stdout:Device: 5h/5d Inode: 25 Links: 1 Device type: fe,20
2026-03-10T14:46:51.805 INFO:teuthology.orchestra.run.vm03.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T14:46:51.806 INFO:teuthology.orchestra.run.vm03.stdout:Access: 2026-03-10 14:40:04.392078048 +0000
2026-03-10T14:46:51.806 INFO:teuthology.orchestra.run.vm03.stdout:Modify: 2026-03-10 14:40:03.352078048 +0000
2026-03-10T14:46:51.806 INFO:teuthology.orchestra.run.vm03.stdout:Change: 2026-03-10 14:40:03.352078048 +0000
2026-03-10T14:46:51.806 INFO:teuthology.orchestra.run.vm03.stdout: Birth: -
2026-03-10T14:46:51.806 DEBUG:teuthology.orchestra.run.vm03:> sudo dd if=/dev/vdc of=/dev/null count=1
2026-03-10T14:46:51.863 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records in 2026-03-10T14:46:51.863
INFO:teuthology.orchestra.run.vm03.stderr:1+0 records out
2026-03-10T14:46:51.863 INFO:teuthology.orchestra.run.vm03.stderr:512 bytes copied, 0.00154175 s, 332 kB/s
2026-03-10T14:46:51.864 DEBUG:teuthology.orchestra.run.vm03:> ! mount | grep -v devtmpfs | grep -q /dev/vdc
2026-03-10T14:46:51.920 DEBUG:teuthology.orchestra.run.vm03:> stat /dev/vdd
2026-03-10T14:46:51.957 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:46:51 vm03 systemd[1]: Started Ceph mgr.x for 93bd26bc-1c8f-11f1-8404-610ce866bde7.
2026-03-10T14:46:51.957 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:51 vm03 bash[23394]: cluster 2026-03-10T14:46:50.693237+0000 mgr.y (mgr.14152) 54 : cluster [DBG] pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T14:46:51.957 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:51 vm03 bash[23394]: audit 2026-03-10T14:46:50.851629+0000 mgr.y (mgr.14152) 55 : audit [DBG] from='client.24110 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm00=y;vm03=x", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T14:46:51.957 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:51 vm03 bash[23394]: cephadm 2026-03-10T14:46:50.852604+0000 mgr.y (mgr.14152) 56 : cephadm [INF] Saving service mgr spec with placement vm00=y;vm03=x;count:2
2026-03-10T14:46:51.957 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:51 vm03 bash[23394]: audit 2026-03-10T14:46:50.858678+0000 mon.a (mon.0) 237 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:51.957 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:51 vm03 bash[23394]: audit 2026-03-10T14:46:50.860063+0000 mon.a (mon.0) 238 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:46:51.957 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:51 vm03 bash[23394]: audit 2026-03-10T14:46:50.861322+0000 mon.a (mon.0) 239 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:46:51.957 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:51 vm03 bash[23394]: audit 2026-03-10T14:46:50.861954+0000 mon.a (mon.0) 240 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T14:46:51.957 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:51 vm03 bash[23394]: audit 2026-03-10T14:46:50.867397+0000 mon.a (mon.0) 241 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:51.957 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:51 vm03 bash[23394]: audit 2026-03-10T14:46:50.869582+0000 mon.a (mon.0) 242 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T14:46:51.957 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:51 vm03 bash[23394]: audit 2026-03-10T14:46:50.871910+0000 mon.a (mon.0) 243 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
2026-03-10T14:46:51.957 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:51 vm03 bash[23394]: audit 2026-03-10T14:46:50.874989+0000 mon.a (mon.0) 244 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-10T14:46:51.957 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:51 vm03 bash[23394]: audit 2026-03-10T14:46:50.875771+0000 mon.a (mon.0) 245 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:46:51.958 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:51 vm03 bash[23394]: cephadm 2026-03-10T14:46:50.876532+0000 mgr.y (mgr.14152) 57 : cephadm [INF] Deploying daemon mgr.x on vm03
2026-03-10T14:46:51.958 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:51 vm03 bash[23394]: audit 2026-03-10T14:46:51.742806+0000 mon.a (mon.0) 246 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:51.958 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:51 vm03 bash[23394]: audit 2026-03-10T14:46:51.747729+0000 mon.a (mon.0) 247 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:51.958 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:51 vm03 bash[23394]: audit 2026-03-10T14:46:51.756393+0000 mon.a (mon.0) 248 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:51.958 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:51 vm03 bash[23394]: audit 2026-03-10T14:46:51.760267+0000 mon.a (mon.0) 249 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:51.958 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:51 vm03 bash[23394]: audit 2026-03-10T14:46:51.771757+0000 mon.a (mon.0) 250 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:46:51.966 INFO:teuthology.orchestra.run.vm03.stdout: File: /dev/vdd
2026-03-10T14:46:51.966 INFO:teuthology.orchestra.run.vm03.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-10T14:46:51.966 INFO:teuthology.orchestra.run.vm03.stdout:Device: 5h/5d Inode: 26 Links: 1 Device type: fe,30
2026-03-10T14:46:51.967 INFO:teuthology.orchestra.run.vm03.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T14:46:51.967 INFO:teuthology.orchestra.run.vm03.stdout:Access: 2026-03-10 14:40:04.384078048 +0000
2026-03-10T14:46:51.967 INFO:teuthology.orchestra.run.vm03.stdout:Modify: 2026-03-10 14:40:03.312078048 +0000
2026-03-10T14:46:51.967 INFO:teuthology.orchestra.run.vm03.stdout:Change: 2026-03-10 14:40:03.312078048 +0000
2026-03-10T14:46:51.967 INFO:teuthology.orchestra.run.vm03.stdout: Birth: -
2026-03-10T14:46:51.967 DEBUG:teuthology.orchestra.run.vm03:> sudo dd if=/dev/vdd of=/dev/null count=1
2026-03-10T14:46:51.996 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:46:51 vm03 bash[24110]: debug 2026-03-10T14:46:51.953+0000 7fb5d8ab9140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
2026-03-10T14:46:52.022 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records in
2026-03-10T14:46:52.022 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records out
2026-03-10T14:46:52.022 INFO:teuthology.orchestra.run.vm03.stderr:512 bytes copied, 0.000137618 s, 3.7 MB/s
2026-03-10T14:46:52.023 DEBUG:teuthology.orchestra.run.vm03:> !
mount | grep -v devtmpfs | grep -q /dev/vdd 2026-03-10T14:46:52.078 DEBUG:teuthology.orchestra.run.vm03:> stat /dev/vde 2026-03-10T14:46:52.126 INFO:teuthology.orchestra.run.vm03.stdout: File: /dev/vde 2026-03-10T14:46:52.126 INFO:teuthology.orchestra.run.vm03.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-10T14:46:52.126 INFO:teuthology.orchestra.run.vm03.stdout:Device: 5h/5d Inode: 27 Links: 1 Device type: fe,40 2026-03-10T14:46:52.126 INFO:teuthology.orchestra.run.vm03.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T14:46:52.126 INFO:teuthology.orchestra.run.vm03.stdout:Access: 2026-03-10 14:40:04.392078048 +0000 2026-03-10T14:46:52.126 INFO:teuthology.orchestra.run.vm03.stdout:Modify: 2026-03-10 14:40:03.308078048 +0000 2026-03-10T14:46:52.126 INFO:teuthology.orchestra.run.vm03.stdout:Change: 2026-03-10 14:40:03.308078048 +0000 2026-03-10T14:46:52.126 INFO:teuthology.orchestra.run.vm03.stdout: Birth: - 2026-03-10T14:46:52.126 DEBUG:teuthology.orchestra.run.vm03:> sudo dd if=/dev/vde of=/dev/null count=1 2026-03-10T14:46:52.176 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records in 2026-03-10T14:46:52.176 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records out 2026-03-10T14:46:52.176 INFO:teuthology.orchestra.run.vm03.stderr:512 bytes copied, 0.000164116 s, 3.1 MB/s 2026-03-10T14:46:52.177 DEBUG:teuthology.orchestra.run.vm03:> ! 
mount | grep -v devtmpfs | grep -q /dev/vde
2026-03-10T14:46:52.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:51 vm00 bash[20726]: cluster 2026-03-10T14:46:50.693237+0000 mgr.y (mgr.14152) 54 : cluster [DBG] pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T14:46:52.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:51 vm00 bash[20726]: audit 2026-03-10T14:46:50.851629+0000 mgr.y (mgr.14152) 55 : audit [DBG] from='client.24110 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm00=y;vm03=x", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T14:46:52.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:51 vm00 bash[20726]: cephadm 2026-03-10T14:46:50.852604+0000 mgr.y (mgr.14152) 56 : cephadm [INF] Saving service mgr spec with placement vm00=y;vm03=x;count:2
2026-03-10T14:46:52.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:51 vm00 bash[20726]: audit 2026-03-10T14:46:50.858678+0000 mon.a (mon.0) 237 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:52.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:51 vm00 bash[20726]: audit 2026-03-10T14:46:50.860063+0000 mon.a (mon.0) 238 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:46:52.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:51 vm00 bash[20726]: audit 2026-03-10T14:46:50.861322+0000 mon.a (mon.0) 239 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:46:52.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:51 vm00 bash[20726]: audit 2026-03-10T14:46:50.861954+0000 mon.a (mon.0) 240 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T14:46:52.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:51 vm00 bash[20726]: audit 2026-03-10T14:46:50.867397+0000 mon.a (mon.0) 241 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:52.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:51 vm00 bash[20726]: audit 2026-03-10T14:46:50.869582+0000 mon.a (mon.0) 242 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T14:46:52.222 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:51 vm00 bash[28403]: cluster 2026-03-10T14:46:50.693237+0000 mgr.y (mgr.14152) 54 : cluster [DBG] pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T14:46:52.222 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:51 vm00 bash[28403]: audit 2026-03-10T14:46:50.851629+0000 mgr.y (mgr.14152) 55 : audit [DBG] from='client.24110 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm00=y;vm03=x", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T14:46:52.222 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:51 vm00 bash[28403]: cephadm 2026-03-10T14:46:50.852604+0000 mgr.y (mgr.14152) 56 : cephadm [INF] Saving service mgr spec with placement vm00=y;vm03=x;count:2
2026-03-10T14:46:52.222 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:51 vm00 bash[28403]: audit 2026-03-10T14:46:50.858678+0000 mon.a (mon.0) 237 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:52.222 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:51 vm00 bash[28403]: audit 2026-03-10T14:46:50.860063+0000 mon.a (mon.0) 238 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:46:52.222 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:51 vm00 bash[28403]: audit 2026-03-10T14:46:50.861322+0000 mon.a (mon.0) 239 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:46:52.222 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:51 vm00 bash[28403]: audit 2026-03-10T14:46:50.861954+0000 mon.a (mon.0) 240 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T14:46:52.222 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:51 vm00 bash[28403]: audit 2026-03-10T14:46:50.867397+0000 mon.a (mon.0) 241 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:52.222 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:51 vm00 bash[28403]: audit 2026-03-10T14:46:50.869582+0000 mon.a (mon.0) 242 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T14:46:52.222 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:51 vm00 bash[28403]: audit 2026-03-10T14:46:50.871910+0000 mon.a (mon.0) 243 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
2026-03-10T14:46:52.222 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:51 vm00 bash[28403]: audit 2026-03-10T14:46:50.874989+0000 mon.a (mon.0) 244 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-10T14:46:52.222 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:51 vm00 bash[28403]: audit 2026-03-10T14:46:50.875771+0000 mon.a (mon.0) 245 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:46:52.222 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:51 vm00 bash[28403]: cephadm 2026-03-10T14:46:50.876532+0000 mgr.y (mgr.14152) 57 : cephadm [INF] Deploying daemon mgr.x on vm03
2026-03-10T14:46:52.222 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:51 vm00 bash[28403]: audit 2026-03-10T14:46:51.742806+0000 mon.a (mon.0) 246 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:52.222 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:51 vm00 bash[28403]: audit 2026-03-10T14:46:51.747729+0000 mon.a (mon.0) 247 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:52.222 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:51 vm00 bash[28403]: audit 2026-03-10T14:46:51.756393+0000 mon.a (mon.0) 248 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:52.222 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:51 vm00 bash[28403]: audit 2026-03-10T14:46:51.760267+0000 mon.a (mon.0) 249 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:46:52.222 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:51 vm00 bash[28403]: 
audit 2026-03-10T14:46:51.760267+0000 mon.a (mon.0) 249 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:52.222 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:51 vm00 bash[28403]: audit 2026-03-10T14:46:51.771757+0000 mon.a (mon.0) 250 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:46:52.222 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:51 vm00 bash[28403]: audit 2026-03-10T14:46:51.771757+0000 mon.a (mon.0) 250 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:46:52.222 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:51 vm00 bash[20726]: audit 2026-03-10T14:46:50.871910+0000 mon.a (mon.0) 243 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished 2026-03-10T14:46:52.222 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:51 vm00 bash[20726]: audit 2026-03-10T14:46:50.871910+0000 mon.a (mon.0) 243 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished 2026-03-10T14:46:52.222 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:51 vm00 bash[20726]: audit 2026-03-10T14:46:50.874989+0000 mon.a (mon.0) 244 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T14:46:52.222 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:51 vm00 bash[20726]: audit 2026-03-10T14:46:50.874989+0000 mon.a (mon.0) 244 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 
2026-03-10T14:46:52.222 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:51 vm00 bash[20726]: audit 2026-03-10T14:46:50.875771+0000 mon.a (mon.0) 245 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:46:52.222 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:51 vm00 bash[20726]: audit 2026-03-10T14:46:50.875771+0000 mon.a (mon.0) 245 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:46:52.222 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:51 vm00 bash[20726]: cephadm 2026-03-10T14:46:50.876532+0000 mgr.y (mgr.14152) 57 : cephadm [INF] Deploying daemon mgr.x on vm03 2026-03-10T14:46:52.222 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:51 vm00 bash[20726]: cephadm 2026-03-10T14:46:50.876532+0000 mgr.y (mgr.14152) 57 : cephadm [INF] Deploying daemon mgr.x on vm03 2026-03-10T14:46:52.222 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:51 vm00 bash[20726]: audit 2026-03-10T14:46:51.742806+0000 mon.a (mon.0) 246 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:52.222 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:51 vm00 bash[20726]: audit 2026-03-10T14:46:51.742806+0000 mon.a (mon.0) 246 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:52.222 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:51 vm00 bash[20726]: audit 2026-03-10T14:46:51.747729+0000 mon.a (mon.0) 247 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:52.222 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:51 vm00 bash[20726]: audit 2026-03-10T14:46:51.747729+0000 mon.a (mon.0) 247 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:52.223 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:51 vm00 bash[20726]: audit 
2026-03-10T14:46:51.756393+0000 mon.a (mon.0) 248 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:52.223 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:51 vm00 bash[20726]: audit 2026-03-10T14:46:51.756393+0000 mon.a (mon.0) 248 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:52.223 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:51 vm00 bash[20726]: audit 2026-03-10T14:46:51.760267+0000 mon.a (mon.0) 249 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:52.223 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:51 vm00 bash[20726]: audit 2026-03-10T14:46:51.760267+0000 mon.a (mon.0) 249 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:52.223 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:51 vm00 bash[20726]: audit 2026-03-10T14:46:51.771757+0000 mon.a (mon.0) 250 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:46:52.223 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:51 vm00 bash[20726]: audit 2026-03-10T14:46:51.771757+0000 mon.a (mon.0) 250 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:46:52.225 INFO:tasks.cephadm:Deploying osd.0 on vm00 with /dev/vde... 
2026-03-10T14:46:52.225 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 -- lvm zap /dev/vde 2026-03-10T14:46:52.374 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:46:51 vm03 bash[24110]: debug 2026-03-10T14:46:51.993+0000 7fb5d8ab9140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-10T14:46:52.374 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:46:52 vm03 bash[24110]: debug 2026-03-10T14:46:52.129+0000 7fb5d8ab9140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-10T14:46:52.874 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:46:52 vm03 bash[24110]: debug 2026-03-10T14:46:52.449+0000 7fb5d8ab9140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-10T14:46:53.292 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:46:52 vm03 bash[24110]: debug 2026-03-10T14:46:52.929+0000 7fb5d8ab9140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-10T14:46:53.293 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:46:53 vm03 bash[24110]: debug 2026-03-10T14:46:53.013+0000 7fb5d8ab9140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-10T14:46:53.293 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:46:53 vm03 bash[24110]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 
2026-03-10T14:46:53.293 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:46:53 vm03 bash[24110]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 2026-03-10T14:46:53.293 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:46:53 vm03 bash[24110]: from numpy import show_config as show_numpy_config 2026-03-10T14:46:53.293 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:46:53 vm03 bash[24110]: debug 2026-03-10T14:46:53.137+0000 7fb5d8ab9140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-10T14:46:53.624 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:46:53 vm03 bash[24110]: debug 2026-03-10T14:46:53.289+0000 7fb5d8ab9140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-10T14:46:53.624 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:46:53 vm03 bash[24110]: debug 2026-03-10T14:46:53.333+0000 7fb5d8ab9140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-10T14:46:53.624 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:46:53 vm03 bash[24110]: debug 2026-03-10T14:46:53.373+0000 7fb5d8ab9140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-10T14:46:53.624 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:46:53 vm03 bash[24110]: debug 2026-03-10T14:46:53.421+0000 7fb5d8ab9140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-10T14:46:53.625 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:46:53 vm03 bash[24110]: debug 2026-03-10T14:46:53.481+0000 7fb5d8ab9140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-10T14:46:54.209 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:53 vm03 bash[23394]: cluster 2026-03-10T14:46:52.693427+0000 mgr.y (mgr.14152) 58 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:46:54.209 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:53 vm03 bash[23394]: cluster 2026-03-10T14:46:52.693427+0000 mgr.y 
(mgr.14152) 58 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:46:54.209 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:46:53 vm03 bash[24110]: debug 2026-03-10T14:46:53.957+0000 7fb5d8ab9140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-10T14:46:54.209 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:46:54 vm03 bash[24110]: debug 2026-03-10T14:46:54.001+0000 7fb5d8ab9140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-10T14:46:54.209 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:46:54 vm03 bash[24110]: debug 2026-03-10T14:46:54.041+0000 7fb5d8ab9140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-10T14:46:54.220 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:53 vm00 bash[28403]: cluster 2026-03-10T14:46:52.693427+0000 mgr.y (mgr.14152) 58 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:46:54.220 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:53 vm00 bash[28403]: cluster 2026-03-10T14:46:52.693427+0000 mgr.y (mgr.14152) 58 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:46:54.220 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:53 vm00 bash[20726]: cluster 2026-03-10T14:46:52.693427+0000 mgr.y (mgr.14152) 58 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:46:54.220 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:53 vm00 bash[20726]: cluster 2026-03-10T14:46:52.693427+0000 mgr.y (mgr.14152) 58 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:46:54.465 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:46:54 vm03 bash[24110]: debug 2026-03-10T14:46:54.205+0000 7fb5d8ab9140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-10T14:46:54.465 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:46:54 vm03 bash[24110]: debug 2026-03-10T14:46:54.249+0000 7fb5d8ab9140 -1 mgr[py] Module crash has 
missing NOTIFY_TYPES member 2026-03-10T14:46:54.465 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:46:54 vm03 bash[24110]: debug 2026-03-10T14:46:54.305+0000 7fb5d8ab9140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-10T14:46:54.874 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:46:54 vm03 bash[24110]: debug 2026-03-10T14:46:54.461+0000 7fb5d8ab9140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-10T14:46:54.874 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:46:54 vm03 bash[24110]: debug 2026-03-10T14:46:54.681+0000 7fb5d8ab9140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-10T14:46:55.190 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:46:54 vm03 bash[24110]: debug 2026-03-10T14:46:54.897+0000 7fb5d8ab9140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-10T14:46:55.190 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:46:54 vm03 bash[24110]: debug 2026-03-10T14:46:54.937+0000 7fb5d8ab9140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-10T14:46:55.190 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:46:54 vm03 bash[24110]: debug 2026-03-10T14:46:54.989+0000 7fb5d8ab9140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-10T14:46:55.516 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:46:55 vm03 bash[24110]: debug 2026-03-10T14:46:55.185+0000 7fb5d8ab9140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-10T14:46:55.874 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:46:55 vm03 bash[24110]: debug 2026-03-10T14:46:55.513+0000 7fb5d8ab9140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-10T14:46:56.374 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:55 vm03 bash[23394]: cluster 2026-03-10T14:46:54.693631+0000 mgr.y (mgr.14152) 59 : cluster [DBG] pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:46:56.374 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:55 vm03 
bash[23394]: cluster 2026-03-10T14:46:54.693631+0000 mgr.y (mgr.14152) 59 : cluster [DBG] pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:46:56.374 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:55 vm03 bash[23394]: cluster 2026-03-10T14:46:55.516486+0000 mon.a (mon.0) 251 : cluster [DBG] Standby manager daemon x started 2026-03-10T14:46:56.374 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:55 vm03 bash[23394]: cluster 2026-03-10T14:46:55.516486+0000 mon.a (mon.0) 251 : cluster [DBG] Standby manager daemon x started 2026-03-10T14:46:56.374 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:55 vm03 bash[23394]: audit 2026-03-10T14:46:55.522413+0000 mon.b (mon.1) 3 : audit [DBG] from='mgr.? 192.168.123.103:0/2999056722' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-10T14:46:56.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:55 vm03 bash[23394]: audit 2026-03-10T14:46:55.522413+0000 mon.b (mon.1) 3 : audit [DBG] from='mgr.? 192.168.123.103:0/2999056722' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-10T14:46:56.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:55 vm03 bash[23394]: audit 2026-03-10T14:46:55.523316+0000 mon.b (mon.1) 4 : audit [DBG] from='mgr.? 192.168.123.103:0/2999056722' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T14:46:56.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:55 vm03 bash[23394]: audit 2026-03-10T14:46:55.523316+0000 mon.b (mon.1) 4 : audit [DBG] from='mgr.? 192.168.123.103:0/2999056722' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T14:46:56.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:55 vm03 bash[23394]: audit 2026-03-10T14:46:55.524484+0000 mon.b (mon.1) 5 : audit [DBG] from='mgr.? 
192.168.123.103:0/2999056722' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-10T14:46:56.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:55 vm03 bash[23394]: audit 2026-03-10T14:46:55.524484+0000 mon.b (mon.1) 5 : audit [DBG] from='mgr.? 192.168.123.103:0/2999056722' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-10T14:46:56.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:55 vm03 bash[23394]: audit 2026-03-10T14:46:55.525226+0000 mon.b (mon.1) 6 : audit [DBG] from='mgr.? 192.168.123.103:0/2999056722' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T14:46:56.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:55 vm03 bash[23394]: audit 2026-03-10T14:46:55.525226+0000 mon.b (mon.1) 6 : audit [DBG] from='mgr.? 192.168.123.103:0/2999056722' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T14:46:56.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:55 vm03 bash[23394]: audit 2026-03-10T14:46:55.614391+0000 mon.a (mon.0) 252 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:56.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:55 vm03 bash[23394]: audit 2026-03-10T14:46:55.614391+0000 mon.a (mon.0) 252 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:56.470 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:55 vm00 bash[28403]: cluster 2026-03-10T14:46:54.693631+0000 mgr.y (mgr.14152) 59 : cluster [DBG] pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:46:56.470 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:55 vm00 bash[28403]: cluster 2026-03-10T14:46:54.693631+0000 mgr.y (mgr.14152) 59 : cluster [DBG] pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:46:56.470 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 
14:46:55 vm00 bash[28403]: cluster 2026-03-10T14:46:55.516486+0000 mon.a (mon.0) 251 : cluster [DBG] Standby manager daemon x started 2026-03-10T14:46:56.470 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:55 vm00 bash[28403]: cluster 2026-03-10T14:46:55.516486+0000 mon.a (mon.0) 251 : cluster [DBG] Standby manager daemon x started 2026-03-10T14:46:56.470 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:55 vm00 bash[28403]: audit 2026-03-10T14:46:55.522413+0000 mon.b (mon.1) 3 : audit [DBG] from='mgr.? 192.168.123.103:0/2999056722' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-10T14:46:56.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:55 vm00 bash[28403]: audit 2026-03-10T14:46:55.522413+0000 mon.b (mon.1) 3 : audit [DBG] from='mgr.? 192.168.123.103:0/2999056722' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-10T14:46:56.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:55 vm00 bash[28403]: audit 2026-03-10T14:46:55.523316+0000 mon.b (mon.1) 4 : audit [DBG] from='mgr.? 192.168.123.103:0/2999056722' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T14:46:56.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:55 vm00 bash[28403]: audit 2026-03-10T14:46:55.523316+0000 mon.b (mon.1) 4 : audit [DBG] from='mgr.? 192.168.123.103:0/2999056722' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T14:46:56.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:55 vm00 bash[28403]: audit 2026-03-10T14:46:55.524484+0000 mon.b (mon.1) 5 : audit [DBG] from='mgr.? 192.168.123.103:0/2999056722' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-10T14:46:56.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:55 vm00 bash[28403]: audit 2026-03-10T14:46:55.524484+0000 mon.b (mon.1) 5 : audit [DBG] from='mgr.? 
192.168.123.103:0/2999056722' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-10T14:46:56.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:55 vm00 bash[28403]: audit 2026-03-10T14:46:55.525226+0000 mon.b (mon.1) 6 : audit [DBG] from='mgr.? 192.168.123.103:0/2999056722' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T14:46:56.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:55 vm00 bash[28403]: audit 2026-03-10T14:46:55.525226+0000 mon.b (mon.1) 6 : audit [DBG] from='mgr.? 192.168.123.103:0/2999056722' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T14:46:56.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:55 vm00 bash[28403]: audit 2026-03-10T14:46:55.614391+0000 mon.a (mon.0) 252 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:56.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:55 vm00 bash[28403]: audit 2026-03-10T14:46:55.614391+0000 mon.a (mon.0) 252 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:56.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:55 vm00 bash[20726]: cluster 2026-03-10T14:46:54.693631+0000 mgr.y (mgr.14152) 59 : cluster [DBG] pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:46:56.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:55 vm00 bash[20726]: cluster 2026-03-10T14:46:54.693631+0000 mgr.y (mgr.14152) 59 : cluster [DBG] pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:46:56.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:55 vm00 bash[20726]: cluster 2026-03-10T14:46:55.516486+0000 mon.a (mon.0) 251 : cluster [DBG] Standby manager daemon x started 2026-03-10T14:46:56.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:55 vm00 bash[20726]: cluster 2026-03-10T14:46:55.516486+0000 mon.a (mon.0) 251 : cluster 
[DBG] Standby manager daemon x started 2026-03-10T14:46:56.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:55 vm00 bash[20726]: audit 2026-03-10T14:46:55.522413+0000 mon.b (mon.1) 3 : audit [DBG] from='mgr.? 192.168.123.103:0/2999056722' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-10T14:46:56.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:55 vm00 bash[20726]: audit 2026-03-10T14:46:55.522413+0000 mon.b (mon.1) 3 : audit [DBG] from='mgr.? 192.168.123.103:0/2999056722' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-10T14:46:56.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:55 vm00 bash[20726]: audit 2026-03-10T14:46:55.523316+0000 mon.b (mon.1) 4 : audit [DBG] from='mgr.? 192.168.123.103:0/2999056722' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T14:46:56.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:55 vm00 bash[20726]: audit 2026-03-10T14:46:55.523316+0000 mon.b (mon.1) 4 : audit [DBG] from='mgr.? 192.168.123.103:0/2999056722' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T14:46:56.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:55 vm00 bash[20726]: audit 2026-03-10T14:46:55.524484+0000 mon.b (mon.1) 5 : audit [DBG] from='mgr.? 192.168.123.103:0/2999056722' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-10T14:46:56.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:55 vm00 bash[20726]: audit 2026-03-10T14:46:55.524484+0000 mon.b (mon.1) 5 : audit [DBG] from='mgr.? 192.168.123.103:0/2999056722' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-10T14:46:56.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:55 vm00 bash[20726]: audit 2026-03-10T14:46:55.525226+0000 mon.b (mon.1) 6 : audit [DBG] from='mgr.? 
192.168.123.103:0/2999056722' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T14:46:56.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:55 vm00 bash[20726]: audit 2026-03-10T14:46:55.525226+0000 mon.b (mon.1) 6 : audit [DBG] from='mgr.? 192.168.123.103:0/2999056722' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T14:46:56.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:55 vm00 bash[20726]: audit 2026-03-10T14:46:55.614391+0000 mon.a (mon.0) 252 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:56.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:55 vm00 bash[20726]: audit 2026-03-10T14:46:55.614391+0000 mon.a (mon.0) 252 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:56.848 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/mon.c/config 2026-03-10T14:46:57.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:56 vm00 bash[20726]: cluster 2026-03-10T14:46:55.974097+0000 mon.a (mon.0) 253 : cluster [DBG] mgrmap e14: y(active, since 59s), standbys: x 2026-03-10T14:46:57.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:56 vm00 bash[20726]: cluster 2026-03-10T14:46:55.974097+0000 mon.a (mon.0) 253 : cluster [DBG] mgrmap e14: y(active, since 59s), standbys: x 2026-03-10T14:46:57.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:56 vm00 bash[20726]: audit 2026-03-10T14:46:55.974902+0000 mon.a (mon.0) 254 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-10T14:46:57.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:56 vm00 bash[20726]: audit 2026-03-10T14:46:55.974902+0000 mon.a (mon.0) 254 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mgr 
metadata", "who": "x", "id": "x"}]: dispatch 2026-03-10T14:46:57.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:56 vm00 bash[20726]: audit 2026-03-10T14:46:56.810891+0000 mon.a (mon.0) 255 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:57.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:56 vm00 bash[20726]: audit 2026-03-10T14:46:56.810891+0000 mon.a (mon.0) 255 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:57.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:56 vm00 bash[20726]: audit 2026-03-10T14:46:56.816567+0000 mon.a (mon.0) 256 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:57.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:56 vm00 bash[20726]: audit 2026-03-10T14:46:56.816567+0000 mon.a (mon.0) 256 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:57.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:56 vm00 bash[20726]: audit 2026-03-10T14:46:56.817322+0000 mon.a (mon.0) 257 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:46:57.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:56 vm00 bash[20726]: audit 2026-03-10T14:46:56.817322+0000 mon.a (mon.0) 257 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:46:57.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:56 vm00 bash[20726]: audit 2026-03-10T14:46:56.817860+0000 mon.a (mon.0) 258 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:46:57.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:56 vm00 bash[20726]: audit 2026-03-10T14:46:56.817860+0000 mon.a (mon.0) 258 : audit [INF] 
from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:46:57.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:56 vm00 bash[20726]: audit 2026-03-10T14:46:56.822843+0000 mon.a (mon.0) 259 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:57.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:56 vm00 bash[20726]: audit 2026-03-10T14:46:56.822843+0000 mon.a (mon.0) 259 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:57.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:56 vm00 bash[20726]: audit 2026-03-10T14:46:56.834286+0000 mon.a (mon.0) 260 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T14:46:57.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:56 vm00 bash[20726]: audit 2026-03-10T14:46:56.834286+0000 mon.a (mon.0) 260 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T14:46:57.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:56 vm00 bash[20726]: audit 2026-03-10T14:46:56.834883+0000 mon.a (mon.0) 261 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T14:46:57.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:56 vm00 bash[20726]: audit 2026-03-10T14:46:56.834883+0000 mon.a (mon.0) 261 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T14:46:57.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:56 vm00 bash[20726]: audit 2026-03-10T14:46:56.835408+0000 mon.a 
(mon.0) 262 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:46:57.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:56 vm00 bash[20726]: audit 2026-03-10T14:46:56.835408+0000 mon.a (mon.0) 262 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:46:57.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:56 vm00 bash[28403]: cluster 2026-03-10T14:46:55.974097+0000 mon.a (mon.0) 253 : cluster [DBG] mgrmap e14: y(active, since 59s), standbys: x 2026-03-10T14:46:57.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:56 vm00 bash[28403]: cluster 2026-03-10T14:46:55.974097+0000 mon.a (mon.0) 253 : cluster [DBG] mgrmap e14: y(active, since 59s), standbys: x 2026-03-10T14:46:57.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:56 vm00 bash[28403]: audit 2026-03-10T14:46:55.974902+0000 mon.a (mon.0) 254 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-10T14:46:57.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:56 vm00 bash[28403]: audit 2026-03-10T14:46:55.974902+0000 mon.a (mon.0) 254 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-10T14:46:57.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:56 vm00 bash[28403]: audit 2026-03-10T14:46:56.810891+0000 mon.a (mon.0) 255 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:57.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:56 vm00 bash[28403]: audit 2026-03-10T14:46:56.810891+0000 mon.a (mon.0) 255 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:57.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:56 vm00 
bash[28403]: audit 2026-03-10T14:46:56.816567+0000 mon.a (mon.0) 256 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:57.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:56 vm00 bash[28403]: audit 2026-03-10T14:46:56.816567+0000 mon.a (mon.0) 256 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:57.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:56 vm00 bash[28403]: audit 2026-03-10T14:46:56.817322+0000 mon.a (mon.0) 257 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:46:57.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:56 vm00 bash[28403]: audit 2026-03-10T14:46:56.817322+0000 mon.a (mon.0) 257 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:46:57.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:56 vm00 bash[28403]: audit 2026-03-10T14:46:56.817860+0000 mon.a (mon.0) 258 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:46:57.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:56 vm00 bash[28403]: audit 2026-03-10T14:46:56.817860+0000 mon.a (mon.0) 258 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:46:57.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:56 vm00 bash[28403]: audit 2026-03-10T14:46:56.822843+0000 mon.a (mon.0) 259 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:57.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:56 vm00 bash[28403]: audit 2026-03-10T14:46:56.822843+0000 mon.a (mon.0) 259 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 
2026-03-10T14:46:57.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:56 vm00 bash[28403]: audit 2026-03-10T14:46:56.834286+0000 mon.a (mon.0) 260 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T14:46:57.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:56 vm00 bash[28403]: audit 2026-03-10T14:46:56.834286+0000 mon.a (mon.0) 260 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T14:46:57.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:56 vm00 bash[28403]: audit 2026-03-10T14:46:56.834883+0000 mon.a (mon.0) 261 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T14:46:57.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:56 vm00 bash[28403]: audit 2026-03-10T14:46:56.834883+0000 mon.a (mon.0) 261 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T14:46:57.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:56 vm00 bash[28403]: audit 2026-03-10T14:46:56.835408+0000 mon.a (mon.0) 262 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:46:57.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:56 vm00 bash[28403]: audit 2026-03-10T14:46:56.835408+0000 mon.a (mon.0) 262 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:46:57.374 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:56 vm03 bash[23394]: cluster 2026-03-10T14:46:55.974097+0000 mon.a (mon.0) 253 
: cluster [DBG] mgrmap e14: y(active, since 59s), standbys: x 2026-03-10T14:46:57.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:56 vm03 bash[23394]: cluster 2026-03-10T14:46:55.974097+0000 mon.a (mon.0) 253 : cluster [DBG] mgrmap e14: y(active, since 59s), standbys: x 2026-03-10T14:46:57.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:56 vm03 bash[23394]: audit 2026-03-10T14:46:55.974902+0000 mon.a (mon.0) 254 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-10T14:46:57.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:56 vm03 bash[23394]: audit 2026-03-10T14:46:55.974902+0000 mon.a (mon.0) 254 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-10T14:46:57.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:56 vm03 bash[23394]: audit 2026-03-10T14:46:56.810891+0000 mon.a (mon.0) 255 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:57.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:56 vm03 bash[23394]: audit 2026-03-10T14:46:56.810891+0000 mon.a (mon.0) 255 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:57.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:56 vm03 bash[23394]: audit 2026-03-10T14:46:56.816567+0000 mon.a (mon.0) 256 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:57.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:56 vm03 bash[23394]: audit 2026-03-10T14:46:56.816567+0000 mon.a (mon.0) 256 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:57.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:56 vm03 bash[23394]: audit 2026-03-10T14:46:56.817322+0000 mon.a (mon.0) 257 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' 
entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:46:57.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:56 vm03 bash[23394]: audit 2026-03-10T14:46:56.817322+0000 mon.a (mon.0) 257 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:46:57.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:56 vm03 bash[23394]: audit 2026-03-10T14:46:56.817860+0000 mon.a (mon.0) 258 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:46:57.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:56 vm03 bash[23394]: audit 2026-03-10T14:46:56.817860+0000 mon.a (mon.0) 258 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:46:57.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:56 vm03 bash[23394]: audit 2026-03-10T14:46:56.822843+0000 mon.a (mon.0) 259 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:57.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:56 vm03 bash[23394]: audit 2026-03-10T14:46:56.822843+0000 mon.a (mon.0) 259 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:57.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:56 vm03 bash[23394]: audit 2026-03-10T14:46:56.834286+0000 mon.a (mon.0) 260 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T14:46:57.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:56 vm03 bash[23394]: audit 2026-03-10T14:46:56.834286+0000 mon.a (mon.0) 260 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 
cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T14:46:57.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:56 vm03 bash[23394]: audit 2026-03-10T14:46:56.834883+0000 mon.a (mon.0) 261 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T14:46:57.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:56 vm03 bash[23394]: audit 2026-03-10T14:46:56.834883+0000 mon.a (mon.0) 261 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T14:46:57.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:56 vm03 bash[23394]: audit 2026-03-10T14:46:56.835408+0000 mon.a (mon.0) 262 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:46:57.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:56 vm03 bash[23394]: audit 2026-03-10T14:46:56.835408+0000 mon.a (mon.0) 262 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:46:57.901 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T14:46:57.922 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 -- ceph orch daemon add osd vm00:/dev/vde 2026-03-10T14:46:58.178 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:58 vm00 bash[28403]: cluster 2026-03-10T14:46:56.693837+0000 mgr.y (mgr.14152) 60 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:46:58.178 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:58 vm00 bash[28403]: cluster 
2026-03-10T14:46:56.693837+0000 mgr.y (mgr.14152) 60 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:46:58.178 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:58 vm00 bash[28403]: cephadm 2026-03-10T14:46:56.834044+0000 mgr.y (mgr.14152) 61 : cephadm [INF] Reconfiguring mgr.y (unknown last config time)... 2026-03-10T14:46:58.178 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:58 vm00 bash[28403]: cephadm 2026-03-10T14:46:56.834044+0000 mgr.y (mgr.14152) 61 : cephadm [INF] Reconfiguring mgr.y (unknown last config time)... 2026-03-10T14:46:58.178 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:58 vm00 bash[28403]: cephadm 2026-03-10T14:46:56.835995+0000 mgr.y (mgr.14152) 62 : cephadm [INF] Reconfiguring daemon mgr.y on vm00 2026-03-10T14:46:58.178 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:58 vm00 bash[28403]: cephadm 2026-03-10T14:46:56.835995+0000 mgr.y (mgr.14152) 62 : cephadm [INF] Reconfiguring daemon mgr.y on vm00 2026-03-10T14:46:58.178 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:58 vm00 bash[20726]: cluster 2026-03-10T14:46:56.693837+0000 mgr.y (mgr.14152) 60 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:46:58.178 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:58 vm00 bash[20726]: cluster 2026-03-10T14:46:56.693837+0000 mgr.y (mgr.14152) 60 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:46:58.178 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:58 vm00 bash[20726]: cephadm 2026-03-10T14:46:56.834044+0000 mgr.y (mgr.14152) 61 : cephadm [INF] Reconfiguring mgr.y (unknown last config time)... 2026-03-10T14:46:58.178 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:58 vm00 bash[20726]: cephadm 2026-03-10T14:46:56.834044+0000 mgr.y (mgr.14152) 61 : cephadm [INF] Reconfiguring mgr.y (unknown last config time)... 
2026-03-10T14:46:58.178 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:58 vm00 bash[20726]: cephadm 2026-03-10T14:46:56.835995+0000 mgr.y (mgr.14152) 62 : cephadm [INF] Reconfiguring daemon mgr.y on vm00 2026-03-10T14:46:58.178 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:58 vm00 bash[20726]: cephadm 2026-03-10T14:46:56.835995+0000 mgr.y (mgr.14152) 62 : cephadm [INF] Reconfiguring daemon mgr.y on vm00 2026-03-10T14:46:58.374 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:58 vm03 bash[23394]: cluster 2026-03-10T14:46:56.693837+0000 mgr.y (mgr.14152) 60 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:46:58.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:58 vm03 bash[23394]: cluster 2026-03-10T14:46:56.693837+0000 mgr.y (mgr.14152) 60 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:46:58.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:58 vm03 bash[23394]: cephadm 2026-03-10T14:46:56.834044+0000 mgr.y (mgr.14152) 61 : cephadm [INF] Reconfiguring mgr.y (unknown last config time)... 2026-03-10T14:46:58.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:58 vm03 bash[23394]: cephadm 2026-03-10T14:46:56.834044+0000 mgr.y (mgr.14152) 61 : cephadm [INF] Reconfiguring mgr.y (unknown last config time)... 
2026-03-10T14:46:58.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:58 vm03 bash[23394]: cephadm 2026-03-10T14:46:56.835995+0000 mgr.y (mgr.14152) 62 : cephadm [INF] Reconfiguring daemon mgr.y on vm00 2026-03-10T14:46:58.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:58 vm03 bash[23394]: cephadm 2026-03-10T14:46:56.835995+0000 mgr.y (mgr.14152) 62 : cephadm [INF] Reconfiguring daemon mgr.y on vm00 2026-03-10T14:46:59.624 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:59 vm03 bash[23394]: audit 2026-03-10T14:46:58.254358+0000 mon.a (mon.0) 263 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:59.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:59 vm03 bash[23394]: audit 2026-03-10T14:46:58.254358+0000 mon.a (mon.0) 263 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:59.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:59 vm03 bash[23394]: audit 2026-03-10T14:46:58.264850+0000 mon.a (mon.0) 264 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:59.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:59 vm03 bash[23394]: audit 2026-03-10T14:46:58.264850+0000 mon.a (mon.0) 264 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:59.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:59 vm03 bash[23394]: audit 2026-03-10T14:46:58.266303+0000 mon.a (mon.0) 265 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:46:59.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:59 vm03 bash[23394]: audit 2026-03-10T14:46:58.266303+0000 mon.a (mon.0) 265 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:46:59.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:59 vm03 
bash[23394]: audit 2026-03-10T14:46:58.267559+0000 mon.a (mon.0) 266 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:46:59.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:59 vm03 bash[23394]: audit 2026-03-10T14:46:58.267559+0000 mon.a (mon.0) 266 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:46:59.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:59 vm03 bash[23394]: audit 2026-03-10T14:46:58.268024+0000 mon.a (mon.0) 267 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:46:59.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:59 vm03 bash[23394]: audit 2026-03-10T14:46:58.268024+0000 mon.a (mon.0) 267 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:46:59.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:59 vm03 bash[23394]: audit 2026-03-10T14:46:58.272587+0000 mon.a (mon.0) 268 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:59.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:59 vm03 bash[23394]: audit 2026-03-10T14:46:58.272587+0000 mon.a (mon.0) 268 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:59.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:59 vm03 bash[23394]: cluster 2026-03-10T14:46:58.694124+0000 mgr.y (mgr.14152) 63 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:46:59.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:46:59 vm03 bash[23394]: cluster 2026-03-10T14:46:58.694124+0000 mgr.y (mgr.14152) 63 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 
2026-03-10T14:46:59.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:59 vm00 bash[28403]: audit 2026-03-10T14:46:58.254358+0000 mon.a (mon.0) 263 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:59.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:59 vm00 bash[28403]: audit 2026-03-10T14:46:58.254358+0000 mon.a (mon.0) 263 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:59.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:59 vm00 bash[28403]: audit 2026-03-10T14:46:58.264850+0000 mon.a (mon.0) 264 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:59.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:59 vm00 bash[28403]: audit 2026-03-10T14:46:58.264850+0000 mon.a (mon.0) 264 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:59.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:59 vm00 bash[28403]: audit 2026-03-10T14:46:58.266303+0000 mon.a (mon.0) 265 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:46:59.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:59 vm00 bash[28403]: audit 2026-03-10T14:46:58.266303+0000 mon.a (mon.0) 265 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:46:59.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:59 vm00 bash[28403]: audit 2026-03-10T14:46:58.267559+0000 mon.a (mon.0) 266 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:46:59.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:59 vm00 bash[28403]: audit 2026-03-10T14:46:58.267559+0000 mon.a (mon.0) 266 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' 
entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:46:59.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:59 vm00 bash[28403]: audit 2026-03-10T14:46:58.268024+0000 mon.a (mon.0) 267 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:46:59.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:59 vm00 bash[28403]: audit 2026-03-10T14:46:58.268024+0000 mon.a (mon.0) 267 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:46:59.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:59 vm00 bash[28403]: audit 2026-03-10T14:46:58.272587+0000 mon.a (mon.0) 268 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:59.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:59 vm00 bash[28403]: audit 2026-03-10T14:46:58.272587+0000 mon.a (mon.0) 268 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:59.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:59 vm00 bash[28403]: cluster 2026-03-10T14:46:58.694124+0000 mgr.y (mgr.14152) 63 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:46:59.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:46:59 vm00 bash[28403]: cluster 2026-03-10T14:46:58.694124+0000 mgr.y (mgr.14152) 63 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:46:59.721 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:59 vm00 bash[20726]: audit 2026-03-10T14:46:58.254358+0000 mon.a (mon.0) 263 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:59.721 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:59 vm00 bash[20726]: audit 2026-03-10T14:46:58.254358+0000 mon.a (mon.0) 263 : audit [INF] from='mgr.14152 
192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:59.721 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:59 vm00 bash[20726]: audit 2026-03-10T14:46:58.264850+0000 mon.a (mon.0) 264 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:59.721 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:59 vm00 bash[20726]: audit 2026-03-10T14:46:58.264850+0000 mon.a (mon.0) 264 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:59.721 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:59 vm00 bash[20726]: audit 2026-03-10T14:46:58.266303+0000 mon.a (mon.0) 265 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:46:59.721 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:59 vm00 bash[20726]: audit 2026-03-10T14:46:58.266303+0000 mon.a (mon.0) 265 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:46:59.721 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:59 vm00 bash[20726]: audit 2026-03-10T14:46:58.267559+0000 mon.a (mon.0) 266 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:46:59.721 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:59 vm00 bash[20726]: audit 2026-03-10T14:46:58.267559+0000 mon.a (mon.0) 266 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:46:59.721 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:59 vm00 bash[20726]: audit 2026-03-10T14:46:58.268024+0000 mon.a (mon.0) 267 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:46:59.721 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:59 vm00 bash[20726]: audit 2026-03-10T14:46:58.268024+0000 mon.a (mon.0) 267 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:46:59.721 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:59 vm00 bash[20726]: audit 2026-03-10T14:46:58.272587+0000 mon.a (mon.0) 268 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:59.721 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:59 vm00 bash[20726]: audit 2026-03-10T14:46:58.272587+0000 mon.a (mon.0) 268 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:46:59.721 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:59 vm00 bash[20726]: cluster 2026-03-10T14:46:58.694124+0000 mgr.y (mgr.14152) 63 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:46:59.721 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:46:59 vm00 bash[20726]: cluster 2026-03-10T14:46:58.694124+0000 mgr.y (mgr.14152) 63 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:47:02.124 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:01 vm03 bash[23394]: cluster 2026-03-10T14:47:00.694384+0000 mgr.y (mgr.14152) 64 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:47:02.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:01 vm03 bash[23394]: cluster 2026-03-10T14:47:00.694384+0000 mgr.y (mgr.14152) 64 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:47:02.220 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:01 vm00 bash[28403]: cluster 2026-03-10T14:47:00.694384+0000 mgr.y (mgr.14152) 64 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:47:02.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:01 vm00 bash[28403]: cluster 
2026-03-10T14:47:00.694384+0000 mgr.y (mgr.14152) 64 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:47:02.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:01 vm00 bash[20726]: cluster 2026-03-10T14:47:00.694384+0000 mgr.y (mgr.14152) 64 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:47:02.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:01 vm00 bash[20726]: cluster 2026-03-10T14:47:00.694384+0000 mgr.y (mgr.14152) 64 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:47:02.542 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/mon.c/config 2026-03-10T14:47:04.470 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:04 vm00 bash[28403]: cluster 2026-03-10T14:47:02.694628+0000 mgr.y (mgr.14152) 65 : cluster [DBG] pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:47:04.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:04 vm00 bash[28403]: cluster 2026-03-10T14:47:02.694628+0000 mgr.y (mgr.14152) 65 : cluster [DBG] pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:47:04.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:04 vm00 bash[28403]: audit 2026-03-10T14:47:03.480922+0000 mon.a (mon.0) 269 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T14:47:04.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:04 vm00 bash[28403]: audit 2026-03-10T14:47:03.480922+0000 mon.a (mon.0) 269 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T14:47:04.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:04 vm00 bash[28403]: audit 2026-03-10T14:47:03.482524+0000 mon.a (mon.0) 270 : audit [INF] from='mgr.14152 
192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T14:47:04.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:04 vm00 bash[28403]: audit 2026-03-10T14:47:03.482524+0000 mon.a (mon.0) 270 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T14:47:04.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:04 vm00 bash[28403]: audit 2026-03-10T14:47:03.483030+0000 mon.a (mon.0) 271 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:47:04.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:04 vm00 bash[28403]: audit 2026-03-10T14:47:03.483030+0000 mon.a (mon.0) 271 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:47:04.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:04 vm00 bash[20726]: cluster 2026-03-10T14:47:02.694628+0000 mgr.y (mgr.14152) 65 : cluster [DBG] pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:47:04.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:04 vm00 bash[20726]: cluster 2026-03-10T14:47:02.694628+0000 mgr.y (mgr.14152) 65 : cluster [DBG] pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:47:04.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:04 vm00 bash[20726]: audit 2026-03-10T14:47:03.480922+0000 mon.a (mon.0) 269 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T14:47:04.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:04 vm00 bash[20726]: audit 2026-03-10T14:47:03.480922+0000 mon.a (mon.0) 269 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 
cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T14:47:04.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:04 vm00 bash[20726]: audit 2026-03-10T14:47:03.482524+0000 mon.a (mon.0) 270 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T14:47:04.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:04 vm00 bash[20726]: audit 2026-03-10T14:47:03.483030+0000 mon.a (mon.0) 271 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:47:04.624 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:04 vm03 bash[23394]: cluster 2026-03-10T14:47:02.694628+0000 mgr.y (mgr.14152) 65 : cluster [DBG] pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T14:47:04.624 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:04 vm03 bash[23394]: audit 2026-03-10T14:47:03.480922+0000 mon.a (mon.0) 269 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T14:47:04.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:04 vm03 bash[23394]: audit 2026-03-10T14:47:03.482524+0000 mon.a (mon.0) 270 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T14:47:04.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:04 vm03 bash[23394]: audit 2026-03-10T14:47:03.483030+0000 mon.a (mon.0) 271 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:47:05.470 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:05 vm00 bash[28403]: audit 2026-03-10T14:47:03.479336+0000 mgr.y (mgr.14152) 66 : audit [DBG] from='client.24115 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T14:47:05.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:05 vm00 bash[20726]: audit 2026-03-10T14:47:03.479336+0000 mgr.y (mgr.14152) 66 : audit [DBG] from='client.24115 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T14:47:05.624 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:05 vm03 bash[23394]: audit 2026-03-10T14:47:03.479336+0000 mgr.y (mgr.14152) 66 : audit [DBG] from='client.24115 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T14:47:06.470 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:06 vm00 bash[20726]: cluster 2026-03-10T14:47:04.694958+0000 mgr.y (mgr.14152) 67 : cluster [DBG] pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T14:47:06.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:06 vm00 bash[28403]: cluster 2026-03-10T14:47:04.694958+0000 mgr.y (mgr.14152) 67 : cluster [DBG] pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T14:47:06.624 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:06 vm03 bash[23394]: cluster 2026-03-10T14:47:04.694958+0000 mgr.y (mgr.14152) 67 : cluster [DBG] pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T14:47:08.413 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:08 vm00 bash[28403]: cluster 2026-03-10T14:47:06.695176+0000 mgr.y (mgr.14152) 68 : cluster [DBG] pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T14:47:08.414 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:08 vm00 bash[20726]: cluster 2026-03-10T14:47:06.695176+0000 mgr.y (mgr.14152) 68 : cluster [DBG] pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T14:47:08.624 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:08 vm03 bash[23394]: cluster 2026-03-10T14:47:06.695176+0000 mgr.y (mgr.14152) 68 : cluster [DBG] pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T14:47:10.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:10 vm00 bash[28403]: cluster 2026-03-10T14:47:08.695453+0000 mgr.y (mgr.14152) 69 : cluster [DBG] pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T14:47:10.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:10 vm00 bash[28403]: audit 2026-03-10T14:47:09.559344+0000 mon.c (mon.2) 3 : audit [INF] from='client.? 192.168.123.100:0/399741475' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c1ba9a14-6c50-4bf4-bfa2-935d1c099357"}]: dispatch
2026-03-10T14:47:10.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:10 vm00 bash[28403]: audit 2026-03-10T14:47:09.559841+0000 mon.a (mon.0) 272 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c1ba9a14-6c50-4bf4-bfa2-935d1c099357"}]: dispatch
2026-03-10T14:47:10.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:10 vm00 bash[28403]: audit 2026-03-10T14:47:09.562974+0000 mon.a (mon.0) 273 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "c1ba9a14-6c50-4bf4-bfa2-935d1c099357"}]': finished
2026-03-10T14:47:10.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:10 vm00 bash[28403]: cluster 2026-03-10T14:47:09.565927+0000 mon.a (mon.0) 274 : cluster [DBG] osdmap e5: 1 total, 0 up, 1 in
2026-03-10T14:47:10.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:10 vm00 bash[28403]: audit 2026-03-10T14:47:09.566097+0000 mon.a (mon.0) 275 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T14:47:10.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:10 vm00 bash[20726]: cluster 2026-03-10T14:47:08.695453+0000 mgr.y (mgr.14152) 69 : cluster [DBG] pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T14:47:10.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:10 vm00 bash[20726]: audit 2026-03-10T14:47:09.559344+0000 mon.c (mon.2) 3 : audit [INF] from='client.? 192.168.123.100:0/399741475' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c1ba9a14-6c50-4bf4-bfa2-935d1c099357"}]: dispatch
2026-03-10T14:47:10.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:10 vm00 bash[20726]: audit 2026-03-10T14:47:09.559841+0000 mon.a (mon.0) 272 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c1ba9a14-6c50-4bf4-bfa2-935d1c099357"}]: dispatch
2026-03-10T14:47:10.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:10 vm00 bash[20726]: audit 2026-03-10T14:47:09.562974+0000 mon.a (mon.0) 273 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "c1ba9a14-6c50-4bf4-bfa2-935d1c099357"}]': finished
2026-03-10T14:47:10.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:10 vm00 bash[20726]: cluster 2026-03-10T14:47:09.565927+0000 mon.a (mon.0) 274 : cluster [DBG] osdmap e5: 1 total, 0 up, 1 in
2026-03-10T14:47:10.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:10 vm00 bash[20726]: audit 2026-03-10T14:47:09.566097+0000 mon.a (mon.0) 275 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T14:47:10.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:10 vm03 bash[23394]: cluster 2026-03-10T14:47:08.695453+0000 mgr.y (mgr.14152) 69 : cluster [DBG] pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T14:47:10.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:10 vm03 bash[23394]: audit 2026-03-10T14:47:09.559344+0000 mon.c (mon.2) 3 : audit [INF] from='client.? 192.168.123.100:0/399741475' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c1ba9a14-6c50-4bf4-bfa2-935d1c099357"}]: dispatch
2026-03-10T14:47:10.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:10 vm03 bash[23394]: audit 2026-03-10T14:47:09.559841+0000 mon.a (mon.0) 272 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c1ba9a14-6c50-4bf4-bfa2-935d1c099357"}]: dispatch
2026-03-10T14:47:10.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:10 vm03 bash[23394]: audit 2026-03-10T14:47:09.562974+0000 mon.a (mon.0) 273 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "c1ba9a14-6c50-4bf4-bfa2-935d1c099357"}]': finished
2026-03-10T14:47:10.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:10 vm03 bash[23394]: cluster 2026-03-10T14:47:09.565927+0000 mon.a (mon.0) 274 : cluster [DBG] osdmap e5: 1 total, 0 up, 1 in
2026-03-10T14:47:10.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:10 vm03 bash[23394]: audit 2026-03-10T14:47:09.566097+0000 mon.a (mon.0) 275 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T14:47:11.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:11 vm03 bash[23394]: audit 2026-03-10T14:47:10.206409+0000 mon.c (mon.2) 4 : audit [DBG] from='client.? 192.168.123.100:0/619735606' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T14:47:11.720 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:11 vm00 bash[28403]: audit 2026-03-10T14:47:10.206409+0000 mon.c (mon.2) 4 : audit [DBG] from='client.? 192.168.123.100:0/619735606' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T14:47:11.721 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:11 vm00 bash[20726]: audit 2026-03-10T14:47:10.206409+0000 mon.c (mon.2) 4 : audit [DBG] from='client.? 192.168.123.100:0/619735606' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T14:47:12.624 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:12 vm03 bash[23394]: cluster 2026-03-10T14:47:10.695714+0000 mgr.y (mgr.14152) 70 : cluster [DBG] pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T14:47:12.720 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:12 vm00 bash[28403]: cluster 2026-03-10T14:47:10.695714+0000 mgr.y (mgr.14152) 70 : cluster [DBG] pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T14:47:12.729 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:12 vm00 bash[20726]: cluster 2026-03-10T14:47:10.695714+0000 mgr.y (mgr.14152) 70 : cluster [DBG] pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T14:47:13.624 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:13 vm03 bash[23394]: cluster 2026-03-10T14:47:12.696018+0000 mgr.y (mgr.14152) 71 : cluster [DBG] pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T14:47:13.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:13 vm00 bash[28403]: cluster 2026-03-10T14:47:12.696018+0000 mgr.y (mgr.14152) 71 : cluster [DBG] pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T14:47:13.721 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:13 vm00 bash[20726]: cluster 2026-03-10T14:47:12.696018+0000 mgr.y (mgr.14152) 71 : cluster [DBG] pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T14:47:16.374 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:16 vm03 bash[23394]: cluster 2026-03-10T14:47:14.696296+0000 mgr.y (mgr.14152) 72 : cluster [DBG] pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T14:47:16.470 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:16 vm00 bash[28403]: cluster 2026-03-10T14:47:14.696296+0000 mgr.y (mgr.14152) 72 : cluster [DBG] pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T14:47:16.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:16 vm00 bash[20726]: cluster 2026-03-10T14:47:14.696296+0000 mgr.y (mgr.14152) 72 : cluster [DBG] pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T14:47:18.374 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:18 vm03 bash[23394]: cluster 2026-03-10T14:47:16.696500+0000 mgr.y (mgr.14152) 73 : cluster [DBG] pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T14:47:18.470 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:18 vm00 bash[28403]: cluster 2026-03-10T14:47:16.696500+0000 mgr.y (mgr.14152) 73 : cluster [DBG] pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T14:47:18.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:18 vm00 bash[20726]: cluster 2026-03-10T14:47:16.696500+0000 mgr.y (mgr.14152) 73 : cluster [DBG] pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T14:47:20.668 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:20 vm00 bash[28403]: cluster 2026-03-10T14:47:18.696712+0000 mgr.y (mgr.14152) 74 : cluster [DBG] pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T14:47:20.668 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:20 vm00 bash[20726]: cluster 2026-03-10T14:47:18.696712+0000 mgr.y (mgr.14152) 74 : cluster [DBG] pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T14:47:20.874 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:20 vm03 bash[23394]: cluster 2026-03-10T14:47:18.696712+0000 mgr.y (mgr.14152) 74 : cluster [DBG] pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T14:47:21.616 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:21 vm00 bash[28403]: cluster 2026-03-10T14:47:20.696968+0000 mgr.y (mgr.14152) 75 : cluster [DBG] pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T14:47:21.616 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:21 vm00 bash[28403]: audit 2026-03-10T14:47:20.724073+0000 mon.a (mon.0) 276 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
2026-03-10T14:47:21.616 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:21 vm00 bash[28403]: audit 2026-03-10T14:47:20.724778+0000 mon.a (mon.0) 277 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:47:21.616 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:21 vm00 bash[28403]: cephadm 2026-03-10T14:47:20.725251+0000 mgr.y (mgr.14152) 76 : cephadm [INF] Deploying daemon osd.0 on vm00
2026-03-10T14:47:21.616 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:21 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:47:21.616 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:47:21 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:47:21.616 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:21 vm00 bash[20726]: cluster 2026-03-10T14:47:20.696968+0000 mgr.y (mgr.14152) 75 : cluster [DBG] pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T14:47:21.616 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:21 vm00 bash[20726]: audit 2026-03-10T14:47:20.724073+0000 mon.a (mon.0) 276 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
2026-03-10T14:47:21.616 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:21 vm00 bash[20726]: audit 2026-03-10T14:47:20.724778+0000 mon.a (mon.0) 277 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:47:21.616 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:21 vm00 bash[20726]: cephadm 2026-03-10T14:47:20.725251+0000 mgr.y (mgr.14152) 76 : cephadm [INF] Deploying daemon osd.0 on vm00
2026-03-10T14:47:21.617 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:21 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:47:21.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:21 vm03 bash[23394]: cluster 2026-03-10T14:47:20.696968+0000 mgr.y (mgr.14152) 75 : cluster [DBG] pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T14:47:21.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:21 vm03 bash[23394]: audit 2026-03-10T14:47:20.724073+0000 mon.a (mon.0) 276 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
2026-03-10T14:47:21.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:21 vm03 bash[23394]: audit 2026-03-10T14:47:20.724778+0000 mon.a (mon.0) 277 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:47:21.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:21 vm03 bash[23394]: cephadm 2026-03-10T14:47:20.725251+0000 mgr.y (mgr.14152) 76 : cephadm [INF] Deploying daemon osd.0 on vm00
2026-03-10T14:47:21.970 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:21 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:47:21.971 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:47:21 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:47:21.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:21 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:47:22.491 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:22 vm00 bash[20726]: audit 2026-03-10T14:47:21.825294+0000 mon.a (mon.0) 278 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:47:22.491 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:22 vm00 bash[20726]: audit 2026-03-10T14:47:22.041120+0000 mon.a (mon.0) 279 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:47:22.492 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:22 vm00 bash[20726]: audit 2026-03-10T14:47:22.133187+0000 mon.a (mon.0) 280 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:47:22.492 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:22 vm00 bash[28403]: audit 2026-03-10T14:47:21.825294+0000 mon.a (mon.0) 278 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:47:22.492 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:22 vm00 bash[28403]: audit 2026-03-10T14:47:21.825294+0000 mon.a (mon.0) 278 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183'
entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:47:22.492 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:22 vm00 bash[28403]: audit 2026-03-10T14:47:22.041120+0000 mon.a (mon.0) 279 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:47:22.492 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:22 vm00 bash[28403]: audit 2026-03-10T14:47:22.041120+0000 mon.a (mon.0) 279 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:47:22.492 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:22 vm00 bash[28403]: audit 2026-03-10T14:47:22.133187+0000 mon.a (mon.0) 280 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:47:22.492 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:22 vm00 bash[28403]: audit 2026-03-10T14:47:22.133187+0000 mon.a (mon.0) 280 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:47:22.874 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:22 vm03 bash[23394]: audit 2026-03-10T14:47:21.825294+0000 mon.a (mon.0) 278 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:47:22.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:22 vm03 bash[23394]: audit 2026-03-10T14:47:21.825294+0000 mon.a (mon.0) 278 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:47:22.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:22 vm03 bash[23394]: audit 2026-03-10T14:47:22.041120+0000 mon.a (mon.0) 279 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:47:22.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:22 vm03 bash[23394]: audit 2026-03-10T14:47:22.041120+0000 mon.a (mon.0) 279 : audit [INF] from='mgr.14152 
192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:47:22.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:22 vm03 bash[23394]: audit 2026-03-10T14:47:22.133187+0000 mon.a (mon.0) 280 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:47:22.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:22 vm03 bash[23394]: audit 2026-03-10T14:47:22.133187+0000 mon.a (mon.0) 280 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:47:23.719 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:23 vm00 bash[20726]: cluster 2026-03-10T14:47:22.697220+0000 mgr.y (mgr.14152) 77 : cluster [DBG] pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:47:23.719 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:23 vm00 bash[20726]: cluster 2026-03-10T14:47:22.697220+0000 mgr.y (mgr.14152) 77 : cluster [DBG] pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:47:23.719 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:23 vm00 bash[28403]: cluster 2026-03-10T14:47:22.697220+0000 mgr.y (mgr.14152) 77 : cluster [DBG] pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:47:23.719 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:23 vm00 bash[28403]: cluster 2026-03-10T14:47:22.697220+0000 mgr.y (mgr.14152) 77 : cluster [DBG] pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:47:23.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:23 vm03 bash[23394]: cluster 2026-03-10T14:47:22.697220+0000 mgr.y (mgr.14152) 77 : cluster [DBG] pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:47:23.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:23 vm03 bash[23394]: cluster 2026-03-10T14:47:22.697220+0000 mgr.y (mgr.14152) 77 : cluster [DBG] pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:47:26.042 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:25 vm00 bash[28403]: cluster 
2026-03-10T14:47:24.697451+0000 mgr.y (mgr.14152) 78 : cluster [DBG] pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:47:26.043 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:25 vm00 bash[28403]: cluster 2026-03-10T14:47:24.697451+0000 mgr.y (mgr.14152) 78 : cluster [DBG] pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:47:26.043 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:25 vm00 bash[20726]: cluster 2026-03-10T14:47:24.697451+0000 mgr.y (mgr.14152) 78 : cluster [DBG] pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:47:26.043 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:25 vm00 bash[20726]: cluster 2026-03-10T14:47:24.697451+0000 mgr.y (mgr.14152) 78 : cluster [DBG] pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:47:26.124 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:25 vm03 bash[23394]: cluster 2026-03-10T14:47:24.697451+0000 mgr.y (mgr.14152) 78 : cluster [DBG] pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:47:26.124 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:25 vm03 bash[23394]: cluster 2026-03-10T14:47:24.697451+0000 mgr.y (mgr.14152) 78 : cluster [DBG] pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:47:27.124 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:26 vm03 bash[23394]: audit 2026-03-10T14:47:26.044537+0000 mon.c (mon.2) 5 : audit [INF] from='osd.0 v2:192.168.123.100:6801/1492812989' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T14:47:27.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:26 vm03 bash[23394]: audit 2026-03-10T14:47:26.044537+0000 mon.c (mon.2) 5 : audit [INF] from='osd.0 v2:192.168.123.100:6801/1492812989' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T14:47:27.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:26 vm03 
bash[23394]: audit 2026-03-10T14:47:26.045039+0000 mon.a (mon.0) 281 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T14:47:27.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:26 vm03 bash[23394]: audit 2026-03-10T14:47:26.045039+0000 mon.a (mon.0) 281 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T14:47:27.208 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:26 vm00 bash[28403]: audit 2026-03-10T14:47:26.044537+0000 mon.c (mon.2) 5 : audit [INF] from='osd.0 v2:192.168.123.100:6801/1492812989' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T14:47:27.208 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:26 vm00 bash[28403]: audit 2026-03-10T14:47:26.044537+0000 mon.c (mon.2) 5 : audit [INF] from='osd.0 v2:192.168.123.100:6801/1492812989' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T14:47:27.208 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:26 vm00 bash[28403]: audit 2026-03-10T14:47:26.045039+0000 mon.a (mon.0) 281 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T14:47:27.208 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:26 vm00 bash[28403]: audit 2026-03-10T14:47:26.045039+0000 mon.a (mon.0) 281 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T14:47:27.208 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:26 vm00 bash[20726]: audit 2026-03-10T14:47:26.044537+0000 mon.c (mon.2) 5 : audit [INF] from='osd.0 v2:192.168.123.100:6801/1492812989' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 
2026-03-10T14:47:27.208 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:26 vm00 bash[20726]: audit 2026-03-10T14:47:26.044537+0000 mon.c (mon.2) 5 : audit [INF] from='osd.0 v2:192.168.123.100:6801/1492812989' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T14:47:27.208 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:26 vm00 bash[20726]: audit 2026-03-10T14:47:26.045039+0000 mon.a (mon.0) 281 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T14:47:27.208 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:26 vm00 bash[20726]: audit 2026-03-10T14:47:26.045039+0000 mon.a (mon.0) 281 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T14:47:27.273 INFO:teuthology.orchestra.run.vm00.stdout:Created osd(s) 0 on host 'vm00' 2026-03-10T14:47:27.354 DEBUG:teuthology.orchestra.run.vm00:osd.0> sudo journalctl -f -n 0 -u ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@osd.0.service 2026-03-10T14:47:27.355 INFO:tasks.cephadm:Deploying osd.1 on vm00 with /dev/vdd... 
2026-03-10T14:47:27.356 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 -- lvm zap /dev/vdd 2026-03-10T14:47:28.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:28 vm03 bash[23394]: cluster 2026-03-10T14:47:26.697708+0000 mgr.y (mgr.14152) 79 : cluster [DBG] pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:47:28.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:28 vm03 bash[23394]: cluster 2026-03-10T14:47:26.697708+0000 mgr.y (mgr.14152) 79 : cluster [DBG] pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:47:28.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:28 vm03 bash[23394]: audit 2026-03-10T14:47:26.848763+0000 mon.a (mon.0) 282 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-10T14:47:28.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:28 vm03 bash[23394]: audit 2026-03-10T14:47:26.848763+0000 mon.a (mon.0) 282 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-10T14:47:28.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:28 vm03 bash[23394]: audit 2026-03-10T14:47:26.856937+0000 mon.c (mon.2) 6 : audit [INF] from='osd.0 v2:192.168.123.100:6801/1492812989' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-10T14:47:28.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:28 vm03 bash[23394]: audit 2026-03-10T14:47:26.856937+0000 mon.c (mon.2) 6 : audit [INF] from='osd.0 v2:192.168.123.100:6801/1492812989' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": 
["host=vm00", "root=default"]}]: dispatch 2026-03-10T14:47:28.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:28 vm03 bash[23394]: cluster 2026-03-10T14:47:26.859695+0000 mon.a (mon.0) 283 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in 2026-03-10T14:47:28.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:28 vm03 bash[23394]: cluster 2026-03-10T14:47:26.859695+0000 mon.a (mon.0) 283 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in 2026-03-10T14:47:28.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:28 vm03 bash[23394]: audit 2026-03-10T14:47:26.859965+0000 mon.a (mon.0) 284 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T14:47:28.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:28 vm03 bash[23394]: audit 2026-03-10T14:47:26.859965+0000 mon.a (mon.0) 284 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T14:47:28.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:28 vm03 bash[23394]: audit 2026-03-10T14:47:26.860356+0000 mon.a (mon.0) 285 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-10T14:47:28.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:28 vm03 bash[23394]: audit 2026-03-10T14:47:26.860356+0000 mon.a (mon.0) 285 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-10T14:47:28.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:28 vm03 bash[23394]: audit 2026-03-10T14:47:27.262272+0000 mon.a (mon.0) 286 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:47:28.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:28 vm03 bash[23394]: audit 
2026-03-10T14:47:27.262272+0000 mon.a (mon.0) 286 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:47:28.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:28 vm03 bash[23394]: audit 2026-03-10T14:47:27.268548+0000 mon.a (mon.0) 287 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:47:28.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:28 vm03 bash[23394]: audit 2026-03-10T14:47:27.268548+0000 mon.a (mon.0) 287 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:47:28.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:28 vm00 bash[28403]: cluster 2026-03-10T14:47:26.697708+0000 mgr.y (mgr.14152) 79 : cluster [DBG] pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:47:28.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:28 vm00 bash[28403]: cluster 2026-03-10T14:47:26.697708+0000 mgr.y (mgr.14152) 79 : cluster [DBG] pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:47:28.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:28 vm00 bash[28403]: audit 2026-03-10T14:47:26.848763+0000 mon.a (mon.0) 282 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-10T14:47:28.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:28 vm00 bash[28403]: audit 2026-03-10T14:47:26.848763+0000 mon.a (mon.0) 282 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-10T14:47:28.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:28 vm00 bash[28403]: audit 2026-03-10T14:47:26.856937+0000 mon.c (mon.2) 6 : audit [INF] from='osd.0 v2:192.168.123.100:6801/1492812989' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-10T14:47:28.721 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:28 vm00 bash[28403]: audit 2026-03-10T14:47:26.856937+0000 mon.c (mon.2) 6 : audit [INF] from='osd.0 v2:192.168.123.100:6801/1492812989' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-10T14:47:28.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:28 vm00 bash[28403]: cluster 2026-03-10T14:47:26.859695+0000 mon.a (mon.0) 283 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in 2026-03-10T14:47:28.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:28 vm00 bash[28403]: cluster 2026-03-10T14:47:26.859695+0000 mon.a (mon.0) 283 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in 2026-03-10T14:47:28.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:28 vm00 bash[28403]: audit 2026-03-10T14:47:26.859965+0000 mon.a (mon.0) 284 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T14:47:28.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:28 vm00 bash[28403]: audit 2026-03-10T14:47:26.859965+0000 mon.a (mon.0) 284 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T14:47:28.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:28 vm00 bash[28403]: audit 2026-03-10T14:47:26.860356+0000 mon.a (mon.0) 285 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-10T14:47:28.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:28 vm00 bash[28403]: audit 2026-03-10T14:47:26.860356+0000 mon.a (mon.0) 285 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-10T14:47:28.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:28 
vm00 bash[28403]: audit 2026-03-10T14:47:27.262272+0000 mon.a (mon.0) 286 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:47:28.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:28 vm00 bash[28403]: audit 2026-03-10T14:47:27.262272+0000 mon.a (mon.0) 286 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:47:28.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:28 vm00 bash[28403]: audit 2026-03-10T14:47:27.268548+0000 mon.a (mon.0) 287 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:47:28.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:28 vm00 bash[28403]: audit 2026-03-10T14:47:27.268548+0000 mon.a (mon.0) 287 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:47:28.721 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:28 vm00 bash[20726]: cluster 2026-03-10T14:47:26.697708+0000 mgr.y (mgr.14152) 79 : cluster [DBG] pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:47:28.721 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:28 vm00 bash[20726]: cluster 2026-03-10T14:47:26.697708+0000 mgr.y (mgr.14152) 79 : cluster [DBG] pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:47:28.721 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:28 vm00 bash[20726]: audit 2026-03-10T14:47:26.848763+0000 mon.a (mon.0) 282 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-10T14:47:28.721 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:28 vm00 bash[20726]: audit 2026-03-10T14:47:26.848763+0000 mon.a (mon.0) 282 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-10T14:47:28.721 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:28 vm00 bash[20726]: audit 2026-03-10T14:47:26.856937+0000 
mon.c (mon.2) 6 : audit [INF] from='osd.0 v2:192.168.123.100:6801/1492812989' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-10T14:47:28.722 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:28 vm00 bash[20726]: audit 2026-03-10T14:47:26.856937+0000 mon.c (mon.2) 6 : audit [INF] from='osd.0 v2:192.168.123.100:6801/1492812989' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-10T14:47:28.722 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:28 vm00 bash[20726]: cluster 2026-03-10T14:47:26.859695+0000 mon.a (mon.0) 283 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in 2026-03-10T14:47:28.722 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:28 vm00 bash[20726]: cluster 2026-03-10T14:47:26.859695+0000 mon.a (mon.0) 283 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in 2026-03-10T14:47:28.722 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:28 vm00 bash[20726]: audit 2026-03-10T14:47:26.859965+0000 mon.a (mon.0) 284 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T14:47:28.722 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:28 vm00 bash[20726]: audit 2026-03-10T14:47:26.859965+0000 mon.a (mon.0) 284 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T14:47:28.722 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:28 vm00 bash[20726]: audit 2026-03-10T14:47:26.860356+0000 mon.a (mon.0) 285 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-10T14:47:28.722 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:28 vm00 bash[20726]: audit 2026-03-10T14:47:26.860356+0000 mon.a (mon.0) 285 : 
audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-10T14:47:28.722 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:28 vm00 bash[20726]: audit 2026-03-10T14:47:27.262272+0000 mon.a (mon.0) 286 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:47:28.722 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:28 vm00 bash[20726]: audit 2026-03-10T14:47:27.262272+0000 mon.a (mon.0) 286 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:47:28.722 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:28 vm00 bash[20726]: audit 2026-03-10T14:47:27.268548+0000 mon.a (mon.0) 287 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:47:28.722 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:28 vm00 bash[20726]: audit 2026-03-10T14:47:27.268548+0000 mon.a (mon.0) 287 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:47:29.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:29 vm03 bash[23394]: cluster 2026-03-10T14:47:27.007742+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T14:47:29.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:29 vm03 bash[23394]: cluster 2026-03-10T14:47:27.007742+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T14:47:29.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:29 vm03 bash[23394]: cluster 2026-03-10T14:47:27.007813+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T14:47:29.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:29 vm03 bash[23394]: cluster 2026-03-10T14:47:27.007813+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T14:47:29.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:29 vm03 bash[23394]: audit 2026-03-10T14:47:28.141930+0000 mon.a (mon.0) 288 : 
audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished 2026-03-10T14:47:29.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:29 vm03 bash[23394]: audit 2026-03-10T14:47:28.141930+0000 mon.a (mon.0) 288 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished 2026-03-10T14:47:29.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:29 vm03 bash[23394]: cluster 2026-03-10T14:47:28.145160+0000 mon.a (mon.0) 289 : cluster [DBG] osdmap e7: 1 total, 0 up, 1 in 2026-03-10T14:47:29.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:29 vm03 bash[23394]: cluster 2026-03-10T14:47:28.145160+0000 mon.a (mon.0) 289 : cluster [DBG] osdmap e7: 1 total, 0 up, 1 in 2026-03-10T14:47:29.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:29 vm03 bash[23394]: audit 2026-03-10T14:47:28.146977+0000 mon.a (mon.0) 290 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T14:47:29.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:29 vm03 bash[23394]: audit 2026-03-10T14:47:28.146977+0000 mon.a (mon.0) 290 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T14:47:29.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:29 vm03 bash[23394]: audit 2026-03-10T14:47:28.201408+0000 mon.a (mon.0) 291 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T14:47:29.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:29 vm03 bash[23394]: audit 2026-03-10T14:47:28.201408+0000 mon.a (mon.0) 291 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: 
dispatch 2026-03-10T14:47:29.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:29 vm03 bash[23394]: audit 2026-03-10T14:47:28.303713+0000 mon.a (mon.0) 292 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:47:29.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:29 vm03 bash[23394]: audit 2026-03-10T14:47:28.303713+0000 mon.a (mon.0) 292 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:47:29.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:29 vm03 bash[23394]: audit 2026-03-10T14:47:28.323828+0000 mon.a (mon.0) 293 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:47:29.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:29 vm03 bash[23394]: audit 2026-03-10T14:47:28.323828+0000 mon.a (mon.0) 293 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:47:29.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:29 vm03 bash[23394]: cluster 2026-03-10T14:47:28.697999+0000 mgr.y (mgr.14152) 80 : cluster [DBG] pgmap v43: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:47:29.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:29 vm03 bash[23394]: cluster 2026-03-10T14:47:28.697999+0000 mgr.y (mgr.14152) 80 : cluster [DBG] pgmap v43: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:47:29.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:29 vm03 bash[23394]: audit 2026-03-10T14:47:29.199017+0000 mon.a (mon.0) 294 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T14:47:29.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:29 vm03 bash[23394]: audit 2026-03-10T14:47:29.199017+0000 mon.a (mon.0) 294 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T14:47:29.669 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:29 vm00 bash[28403]: cluster 2026-03-10T14:47:27.007742+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T14:47:29.669 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:29 vm00 bash[28403]: cluster 2026-03-10T14:47:27.007742+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T14:47:29.669 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:29 vm00 bash[28403]: cluster 2026-03-10T14:47:27.007813+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T14:47:29.669 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:29 vm00 bash[28403]: cluster 2026-03-10T14:47:27.007813+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T14:47:29.669 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:29 vm00 bash[28403]: audit 2026-03-10T14:47:28.141930+0000 mon.a (mon.0) 288 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished 2026-03-10T14:47:29.669 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:29 vm00 bash[28403]: audit 2026-03-10T14:47:28.141930+0000 mon.a (mon.0) 288 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished 2026-03-10T14:47:29.669 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:29 vm00 bash[28403]: cluster 2026-03-10T14:47:28.145160+0000 mon.a (mon.0) 289 : cluster [DBG] osdmap e7: 1 total, 0 up, 1 in 2026-03-10T14:47:29.669 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:29 vm00 bash[28403]: cluster 2026-03-10T14:47:28.145160+0000 mon.a (mon.0) 289 : cluster [DBG] osdmap e7: 1 total, 0 up, 1 in 2026-03-10T14:47:29.669 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:29 vm00 bash[28403]: audit 2026-03-10T14:47:28.146977+0000 mon.a (mon.0) 290 : audit [DBG] from='mgr.14152 
192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T14:47:29.669 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:29 vm00 bash[28403]: audit 2026-03-10T14:47:28.146977+0000 mon.a (mon.0) 290 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T14:47:29.669 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:29 vm00 bash[28403]: audit 2026-03-10T14:47:28.201408+0000 mon.a (mon.0) 291 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T14:47:29.669 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:29 vm00 bash[28403]: audit 2026-03-10T14:47:28.201408+0000 mon.a (mon.0) 291 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T14:47:29.669 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:29 vm00 bash[28403]: audit 2026-03-10T14:47:28.303713+0000 mon.a (mon.0) 292 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:47:29.669 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:29 vm00 bash[28403]: audit 2026-03-10T14:47:28.303713+0000 mon.a (mon.0) 292 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:47:29.669 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:29 vm00 bash[28403]: audit 2026-03-10T14:47:28.323828+0000 mon.a (mon.0) 293 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:47:29.669 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:29 vm00 bash[28403]: audit 2026-03-10T14:47:28.323828+0000 mon.a (mon.0) 293 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:47:29.669 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:29 vm00 bash[28403]: cluster 2026-03-10T14:47:28.697999+0000 mgr.y 
(mgr.14152) 80 : cluster [DBG] pgmap v43: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:47:29.669 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:29 vm00 bash[28403]: cluster 2026-03-10T14:47:28.697999+0000 mgr.y (mgr.14152) 80 : cluster [DBG] pgmap v43: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:47:29.669 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:29 vm00 bash[28403]: audit 2026-03-10T14:47:29.199017+0000 mon.a (mon.0) 294 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T14:47:29.670 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:29 vm00 bash[28403]: audit 2026-03-10T14:47:29.199017+0000 mon.a (mon.0) 294 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T14:47:29.670 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:29 vm00 bash[20726]: cluster 2026-03-10T14:47:27.007742+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T14:47:29.670 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:29 vm00 bash[20726]: cluster 2026-03-10T14:47:27.007742+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T14:47:29.670 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:29 vm00 bash[20726]: cluster 2026-03-10T14:47:27.007813+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T14:47:29.670 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:29 vm00 bash[20726]: cluster 2026-03-10T14:47:27.007813+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T14:47:29.670 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:29 vm00 bash[20726]: audit 2026-03-10T14:47:28.141930+0000 mon.a (mon.0) 288 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished 2026-03-10T14:47:29.670 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:29 vm00 bash[20726]: audit 2026-03-10T14:47:28.141930+0000 mon.a (mon.0) 288 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished 2026-03-10T14:47:29.670 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:29 vm00 bash[20726]: cluster 2026-03-10T14:47:28.145160+0000 mon.a (mon.0) 289 : cluster [DBG] osdmap e7: 1 total, 0 up, 1 in 2026-03-10T14:47:29.670 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:29 vm00 bash[20726]: cluster 2026-03-10T14:47:28.145160+0000 mon.a (mon.0) 289 : cluster [DBG] osdmap e7: 1 total, 0 up, 1 in 2026-03-10T14:47:29.670 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:29 vm00 bash[20726]: audit 2026-03-10T14:47:28.146977+0000 mon.a (mon.0) 290 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T14:47:29.670 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:29 vm00 bash[20726]: audit 2026-03-10T14:47:28.146977+0000 mon.a (mon.0) 290 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T14:47:29.670 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:29 vm00 bash[20726]: audit 2026-03-10T14:47:28.201408+0000 mon.a (mon.0) 291 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T14:47:29.670 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:29 vm00 bash[20726]: audit 2026-03-10T14:47:28.201408+0000 mon.a (mon.0) 291 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T14:47:29.670 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:29 vm00 bash[20726]: audit 2026-03-10T14:47:28.303713+0000 mon.a (mon.0) 292 : audit [INF] 
from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:47:29.670 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:29 vm00 bash[20726]: audit 2026-03-10T14:47:28.303713+0000 mon.a (mon.0) 292 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:47:29.670 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:29 vm00 bash[20726]: audit 2026-03-10T14:47:28.323828+0000 mon.a (mon.0) 293 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:47:29.670 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:29 vm00 bash[20726]: audit 2026-03-10T14:47:28.323828+0000 mon.a (mon.0) 293 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:47:29.670 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:29 vm00 bash[20726]: cluster 2026-03-10T14:47:28.697999+0000 mgr.y (mgr.14152) 80 : cluster [DBG] pgmap v43: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:47:29.670 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:29 vm00 bash[20726]: cluster 2026-03-10T14:47:28.697999+0000 mgr.y (mgr.14152) 80 : cluster [DBG] pgmap v43: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:47:29.670 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:29 vm00 bash[20726]: audit 2026-03-10T14:47:29.199017+0000 mon.a (mon.0) 294 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T14:47:29.670 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:29 vm00 bash[20726]: audit 2026-03-10T14:47:29.199017+0000 mon.a (mon.0) 294 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T14:47:29.971 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 10 14:47:29 vm00 bash[31304]: debug 2026-03-10T14:47:29.659+0000 7f51c1f4a640 -1 osd.0 0 waiting for initial osdmap 2026-03-10T14:47:29.971 
INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 10 14:47:29 vm00 bash[31304]: debug 2026-03-10T14:47:29.671+0000 7f51bd573640 -1 osd.0 7 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T14:47:30.970 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:30 vm00 bash[28403]: audit 2026-03-10T14:47:29.670399+0000 mon.a (mon.0) 295 : audit [INF] from='osd.0 ' entity='osd.0' 2026-03-10T14:47:30.971 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:30 vm00 bash[28403]: audit 2026-03-10T14:47:29.670399+0000 mon.a (mon.0) 295 : audit [INF] from='osd.0 ' entity='osd.0' 2026-03-10T14:47:30.971 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:30 vm00 bash[28403]: audit 2026-03-10T14:47:30.199085+0000 mon.a (mon.0) 296 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T14:47:30.971 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:30 vm00 bash[28403]: audit 2026-03-10T14:47:30.199085+0000 mon.a (mon.0) 296 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T14:47:30.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:30 vm00 bash[20726]: audit 2026-03-10T14:47:29.670399+0000 mon.a (mon.0) 295 : audit [INF] from='osd.0 ' entity='osd.0' 2026-03-10T14:47:30.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:30 vm00 bash[20726]: audit 2026-03-10T14:47:29.670399+0000 mon.a (mon.0) 295 : audit [INF] from='osd.0 ' entity='osd.0' 2026-03-10T14:47:30.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:30 vm00 bash[20726]: audit 2026-03-10T14:47:30.199085+0000 mon.a (mon.0) 296 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T14:47:30.971 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:30 vm00 bash[20726]: audit 2026-03-10T14:47:30.199085+0000 mon.a 
(mon.0) 296 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T14:47:31.108 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/mon.c/config 2026-03-10T14:47:31.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:30 vm03 bash[23394]: audit 2026-03-10T14:47:29.670399+0000 mon.a (mon.0) 295 : audit [INF] from='osd.0 ' entity='osd.0' 2026-03-10T14:47:31.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:30 vm03 bash[23394]: audit 2026-03-10T14:47:29.670399+0000 mon.a (mon.0) 295 : audit [INF] from='osd.0 ' entity='osd.0' 2026-03-10T14:47:31.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:30 vm03 bash[23394]: audit 2026-03-10T14:47:30.199085+0000 mon.a (mon.0) 296 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T14:47:31.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:30 vm03 bash[23394]: audit 2026-03-10T14:47:30.199085+0000 mon.a (mon.0) 296 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T14:47:32.157 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:31 vm00 bash[28403]: cluster 2026-03-10T14:47:30.673985+0000 mon.a (mon.0) 297 : cluster [INF] osd.0 v2:192.168.123.100:6801/1492812989 boot 2026-03-10T14:47:32.157 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:31 vm00 bash[28403]: cluster 2026-03-10T14:47:30.673985+0000 mon.a (mon.0) 297 : cluster [INF] osd.0 v2:192.168.123.100:6801/1492812989 boot 2026-03-10T14:47:32.157 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:31 vm00 bash[28403]: cluster 2026-03-10T14:47:30.674068+0000 mon.a (mon.0) 298 : cluster [DBG] osdmap e8: 1 total, 1 up, 1 in 2026-03-10T14:47:32.157 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:31 vm00 bash[28403]: cluster 
2026-03-10T14:47:30.674068+0000 mon.a (mon.0) 298 : cluster [DBG] osdmap e8: 1 total, 1 up, 1 in 2026-03-10T14:47:32.157 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:31 vm00 bash[28403]: audit 2026-03-10T14:47:30.674671+0000 mon.a (mon.0) 299 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T14:47:32.157 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:31 vm00 bash[28403]: audit 2026-03-10T14:47:30.674671+0000 mon.a (mon.0) 299 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T14:47:32.157 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:31 vm00 bash[28403]: cluster 2026-03-10T14:47:30.698245+0000 mgr.y (mgr.14152) 81 : cluster [DBG] pgmap v45: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:47:32.157 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:31 vm00 bash[28403]: cluster 2026-03-10T14:47:30.698245+0000 mgr.y (mgr.14152) 81 : cluster [DBG] pgmap v45: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:47:32.157 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:31 vm00 bash[20726]: cluster 2026-03-10T14:47:30.673985+0000 mon.a (mon.0) 297 : cluster [INF] osd.0 v2:192.168.123.100:6801/1492812989 boot 2026-03-10T14:47:32.157 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:31 vm00 bash[20726]: cluster 2026-03-10T14:47:30.673985+0000 mon.a (mon.0) 297 : cluster [INF] osd.0 v2:192.168.123.100:6801/1492812989 boot 2026-03-10T14:47:32.158 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:31 vm00 bash[20726]: cluster 2026-03-10T14:47:30.674068+0000 mon.a (mon.0) 298 : cluster [DBG] osdmap e8: 1 total, 1 up, 1 in 2026-03-10T14:47:32.158 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:31 vm00 bash[20726]: cluster 2026-03-10T14:47:30.674068+0000 mon.a (mon.0) 298 : cluster [DBG] osdmap e8: 1 total, 1 up, 1 in 2026-03-10T14:47:32.158 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:31 vm00 bash[20726]: audit 2026-03-10T14:47:30.674671+0000 mon.a (mon.0) 299 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T14:47:32.158 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:31 vm00 bash[20726]: audit 2026-03-10T14:47:30.674671+0000 mon.a (mon.0) 299 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T14:47:32.158 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:31 vm00 bash[20726]: cluster 2026-03-10T14:47:30.698245+0000 mgr.y (mgr.14152) 81 : cluster [DBG] pgmap v45: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:47:32.158 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:31 vm00 bash[20726]: cluster 2026-03-10T14:47:30.698245+0000 mgr.y (mgr.14152) 81 : cluster [DBG] pgmap v45: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:47:32.181 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T14:47:32.194 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 -- ceph orch daemon add osd vm00:/dev/vdd 2026-03-10T14:47:32.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:31 vm03 bash[23394]: cluster 2026-03-10T14:47:30.673985+0000 mon.a (mon.0) 297 : cluster [INF] osd.0 v2:192.168.123.100:6801/1492812989 boot 2026-03-10T14:47:32.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:31 vm03 bash[23394]: cluster 2026-03-10T14:47:30.673985+0000 mon.a (mon.0) 297 : cluster [INF] osd.0 v2:192.168.123.100:6801/1492812989 boot 2026-03-10T14:47:32.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:31 vm03 bash[23394]: cluster 2026-03-10T14:47:30.674068+0000 mon.a (mon.0) 298 : cluster [DBG] 
osdmap e8: 1 total, 1 up, 1 in 2026-03-10T14:47:32.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:31 vm03 bash[23394]: cluster 2026-03-10T14:47:30.674068+0000 mon.a (mon.0) 298 : cluster [DBG] osdmap e8: 1 total, 1 up, 1 in 2026-03-10T14:47:32.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:31 vm03 bash[23394]: audit 2026-03-10T14:47:30.674671+0000 mon.a (mon.0) 299 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T14:47:32.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:31 vm03 bash[23394]: audit 2026-03-10T14:47:30.674671+0000 mon.a (mon.0) 299 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T14:47:32.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:31 vm03 bash[23394]: cluster 2026-03-10T14:47:30.698245+0000 mgr.y (mgr.14152) 81 : cluster [DBG] pgmap v45: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:47:32.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:31 vm03 bash[23394]: cluster 2026-03-10T14:47:30.698245+0000 mgr.y (mgr.14152) 81 : cluster [DBG] pgmap v45: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:47:33.305 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:33 vm00 bash[28403]: cluster 2026-03-10T14:47:31.904151+0000 mon.a (mon.0) 300 : cluster [DBG] osdmap e9: 1 total, 1 up, 1 in 2026-03-10T14:47:33.305 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:33 vm00 bash[28403]: cluster 2026-03-10T14:47:31.904151+0000 mon.a (mon.0) 300 : cluster [DBG] osdmap e9: 1 total, 1 up, 1 in 2026-03-10T14:47:33.305 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:33 vm00 bash[20726]: cluster 2026-03-10T14:47:31.904151+0000 mon.a (mon.0) 300 : cluster [DBG] osdmap e9: 1 total, 1 up, 1 in 2026-03-10T14:47:33.305 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:33 vm00 bash[20726]: cluster 2026-03-10T14:47:31.904151+0000 
mon.a (mon.0) 300 : cluster [DBG] osdmap e9: 1 total, 1 up, 1 in 2026-03-10T14:47:33.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:33 vm03 bash[23394]: cluster 2026-03-10T14:47:31.904151+0000 mon.a (mon.0) 300 : cluster [DBG] osdmap e9: 1 total, 1 up, 1 in 2026-03-10T14:47:33.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:33 vm03 bash[23394]: cluster 2026-03-10T14:47:31.904151+0000 mon.a (mon.0) 300 : cluster [DBG] osdmap e9: 1 total, 1 up, 1 in 2026-03-10T14:47:34.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:34 vm00 bash[28403]: cluster 2026-03-10T14:47:32.698552+0000 mgr.y (mgr.14152) 82 : cluster [DBG] pgmap v47: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:34.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:34 vm00 bash[28403]: cluster 2026-03-10T14:47:32.698552+0000 mgr.y (mgr.14152) 82 : cluster [DBG] pgmap v47: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:34.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:34 vm00 bash[28403]: audit 2026-03-10T14:47:33.966751+0000 mon.a (mon.0) 301 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:47:34.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:34 vm00 bash[28403]: audit 2026-03-10T14:47:33.966751+0000 mon.a (mon.0) 301 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:47:34.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:34 vm00 bash[28403]: audit 2026-03-10T14:47:33.972342+0000 mon.a (mon.0) 302 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:47:34.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:34 vm00 bash[28403]: audit 2026-03-10T14:47:33.972342+0000 mon.a (mon.0) 302 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:47:34.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:34 vm00 bash[28403]: audit 2026-03-10T14:47:33.973294+0000 mon.a 
(mon.0) 303 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-10T14:47:34.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:34 vm00 bash[28403]: audit 2026-03-10T14:47:33.973294+0000 mon.a (mon.0) 303 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-10T14:47:34.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:34 vm00 bash[28403]: audit 2026-03-10T14:47:33.974090+0000 mon.a (mon.0) 304 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:47:34.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:34 vm00 bash[28403]: audit 2026-03-10T14:47:33.974090+0000 mon.a (mon.0) 304 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:47:34.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:34 vm00 bash[28403]: audit 2026-03-10T14:47:33.974605+0000 mon.a (mon.0) 305 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:47:34.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:34 vm00 bash[28403]: audit 2026-03-10T14:47:33.974605+0000 mon.a (mon.0) 305 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:47:34.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:34 vm00 bash[28403]: audit 2026-03-10T14:47:33.979025+0000 mon.a (mon.0) 306 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:47:34.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:34 vm00 bash[28403]: audit 
2026-03-10T14:47:33.979025+0000 mon.a (mon.0) 306 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:47:34.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:34 vm00 bash[28403]: audit 2026-03-10T14:47:33.991019+0000 mon.a (mon.0) 307 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:47:34.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:34 vm00 bash[28403]: audit 2026-03-10T14:47:33.991019+0000 mon.a (mon.0) 307 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:47:34.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:34 vm00 bash[28403]: audit 2026-03-10T14:47:33.992035+0000 mon.a (mon.0) 308 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:47:34.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:34 vm00 bash[28403]: audit 2026-03-10T14:47:33.992035+0000 mon.a (mon.0) 308 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:47:34.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:34 vm00 bash[28403]: audit 2026-03-10T14:47:33.992657+0000 mon.a (mon.0) 309 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:47:34.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:34 vm00 bash[28403]: audit 2026-03-10T14:47:33.992657+0000 mon.a (mon.0) 309 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:47:34.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:34 vm00 bash[28403]: audit 2026-03-10T14:47:33.996719+0000 
mon.a (mon.0) 310 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:47:34.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:34 vm00 bash[28403]: audit 2026-03-10T14:47:33.996719+0000 mon.a (mon.0) 310 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:47:34.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:34 vm00 bash[20726]: cluster 2026-03-10T14:47:32.698552+0000 mgr.y (mgr.14152) 82 : cluster [DBG] pgmap v47: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:34.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:34 vm00 bash[20726]: cluster 2026-03-10T14:47:32.698552+0000 mgr.y (mgr.14152) 82 : cluster [DBG] pgmap v47: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:34.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:34 vm00 bash[20726]: audit 2026-03-10T14:47:33.966751+0000 mon.a (mon.0) 301 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:47:34.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:34 vm00 bash[20726]: audit 2026-03-10T14:47:33.966751+0000 mon.a (mon.0) 301 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:47:34.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:34 vm00 bash[20726]: audit 2026-03-10T14:47:33.972342+0000 mon.a (mon.0) 302 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:47:34.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:34 vm00 bash[20726]: audit 2026-03-10T14:47:33.972342+0000 mon.a (mon.0) 302 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:47:34.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:34 vm00 bash[20726]: audit 2026-03-10T14:47:33.973294+0000 mon.a (mon.0) 303 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": 
"osd_memory_target"}]: dispatch 2026-03-10T14:47:34.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:34 vm00 bash[20726]: audit 2026-03-10T14:47:33.973294+0000 mon.a (mon.0) 303 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-10T14:47:34.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:34 vm00 bash[20726]: audit 2026-03-10T14:47:33.974090+0000 mon.a (mon.0) 304 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:47:34.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:34 vm00 bash[20726]: audit 2026-03-10T14:47:33.974090+0000 mon.a (mon.0) 304 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:47:34.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:34 vm00 bash[20726]: audit 2026-03-10T14:47:33.974605+0000 mon.a (mon.0) 305 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:47:34.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:34 vm00 bash[20726]: audit 2026-03-10T14:47:33.974605+0000 mon.a (mon.0) 305 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:47:34.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:34 vm00 bash[20726]: audit 2026-03-10T14:47:33.979025+0000 mon.a (mon.0) 306 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:47:34.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:34 vm00 bash[20726]: audit 2026-03-10T14:47:33.979025+0000 mon.a (mon.0) 306 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:47:34.221 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:34 vm00 bash[20726]: audit 2026-03-10T14:47:33.991019+0000 mon.a (mon.0) 307 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:47:34.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:34 vm00 bash[20726]: audit 2026-03-10T14:47:33.991019+0000 mon.a (mon.0) 307 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:47:34.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:34 vm00 bash[20726]: audit 2026-03-10T14:47:33.992035+0000 mon.a (mon.0) 308 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:47:34.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:34 vm00 bash[20726]: audit 2026-03-10T14:47:33.992035+0000 mon.a (mon.0) 308 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:47:34.222 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:34 vm00 bash[20726]: audit 2026-03-10T14:47:33.992657+0000 mon.a (mon.0) 309 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:47:34.222 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:34 vm00 bash[20726]: audit 2026-03-10T14:47:33.992657+0000 mon.a (mon.0) 309 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:47:34.222 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:34 vm00 bash[20726]: audit 2026-03-10T14:47:33.996719+0000 mon.a (mon.0) 310 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:47:34.222 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:34 vm00 bash[20726]: audit 2026-03-10T14:47:33.996719+0000 mon.a (mon.0) 310 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:47:34.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:34 vm03 bash[23394]: cluster 2026-03-10T14:47:32.698552+0000 mgr.y (mgr.14152) 82 : cluster [DBG] pgmap v47: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:34.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:34 vm03 bash[23394]: cluster 2026-03-10T14:47:32.698552+0000 mgr.y (mgr.14152) 82 : cluster [DBG] pgmap v47: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:34.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:34 vm03 bash[23394]: audit 2026-03-10T14:47:33.966751+0000 mon.a (mon.0) 301 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:47:34.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:34 vm03 bash[23394]: audit 2026-03-10T14:47:33.966751+0000 mon.a (mon.0) 301 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:47:34.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:34 vm03 bash[23394]: audit 2026-03-10T14:47:33.972342+0000 mon.a (mon.0) 302 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:47:34.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:34 vm03 bash[23394]: audit 2026-03-10T14:47:33.972342+0000 mon.a (mon.0) 302 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:47:34.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:34 vm03 bash[23394]: audit 2026-03-10T14:47:33.973294+0000 mon.a (mon.0) 303 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-10T14:47:34.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:34 vm03 
bash[23394]: audit 2026-03-10T14:47:33.973294+0000 mon.a (mon.0) 303 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-10T14:47:34.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:34 vm03 bash[23394]: audit 2026-03-10T14:47:33.974090+0000 mon.a (mon.0) 304 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:47:34.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:34 vm03 bash[23394]: audit 2026-03-10T14:47:33.974090+0000 mon.a (mon.0) 304 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:47:34.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:34 vm03 bash[23394]: audit 2026-03-10T14:47:33.974605+0000 mon.a (mon.0) 305 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:47:34.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:34 vm03 bash[23394]: audit 2026-03-10T14:47:33.974605+0000 mon.a (mon.0) 305 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:47:34.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:34 vm03 bash[23394]: audit 2026-03-10T14:47:33.979025+0000 mon.a (mon.0) 306 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:47:34.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:34 vm03 bash[23394]: audit 2026-03-10T14:47:33.979025+0000 mon.a (mon.0) 306 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:47:34.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:34 vm03 bash[23394]: audit 2026-03-10T14:47:33.991019+0000 mon.a 
(mon.0) 307 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:47:34.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:34 vm03 bash[23394]: audit 2026-03-10T14:47:33.991019+0000 mon.a (mon.0) 307 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:47:34.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:34 vm03 bash[23394]: audit 2026-03-10T14:47:33.992035+0000 mon.a (mon.0) 308 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:47:34.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:34 vm03 bash[23394]: audit 2026-03-10T14:47:33.992035+0000 mon.a (mon.0) 308 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:47:34.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:34 vm03 bash[23394]: audit 2026-03-10T14:47:33.992657+0000 mon.a (mon.0) 309 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:47:34.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:34 vm03 bash[23394]: audit 2026-03-10T14:47:33.992657+0000 mon.a (mon.0) 309 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:47:34.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:34 vm03 bash[23394]: audit 2026-03-10T14:47:33.996719+0000 mon.a (mon.0) 310 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:47:34.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:34 vm03 bash[23394]: audit 2026-03-10T14:47:33.996719+0000 mon.a (mon.0) 310 : audit [INF] 
from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:47:35.374 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:35 vm03 bash[23394]: cephadm 2026-03-10T14:47:33.960649+0000 mgr.y (mgr.14152) 83 : cephadm [INF] Detected new or changed devices on vm00 2026-03-10T14:47:35.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:35 vm03 bash[23394]: cephadm 2026-03-10T14:47:33.960649+0000 mgr.y (mgr.14152) 83 : cephadm [INF] Detected new or changed devices on vm00 2026-03-10T14:47:35.470 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:35 vm00 bash[28403]: cephadm 2026-03-10T14:47:33.960649+0000 mgr.y (mgr.14152) 83 : cephadm [INF] Detected new or changed devices on vm00 2026-03-10T14:47:35.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:35 vm00 bash[28403]: cephadm 2026-03-10T14:47:33.960649+0000 mgr.y (mgr.14152) 83 : cephadm [INF] Detected new or changed devices on vm00 2026-03-10T14:47:35.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:35 vm00 bash[20726]: cephadm 2026-03-10T14:47:33.960649+0000 mgr.y (mgr.14152) 83 : cephadm [INF] Detected new or changed devices on vm00 2026-03-10T14:47:35.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:35 vm00 bash[20726]: cephadm 2026-03-10T14:47:33.960649+0000 mgr.y (mgr.14152) 83 : cephadm [INF] Detected new or changed devices on vm00 2026-03-10T14:47:36.126 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/mon.c/config 2026-03-10T14:47:36.374 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:36 vm03 bash[23394]: cluster 2026-03-10T14:47:34.698817+0000 mgr.y (mgr.14152) 84 : cluster [DBG] pgmap v48: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:36.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:36 vm03 bash[23394]: cluster 2026-03-10T14:47:34.698817+0000 mgr.y (mgr.14152) 84 : cluster [DBG] pgmap v48: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:36.470 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:36 vm00 bash[20726]: cluster 2026-03-10T14:47:34.698817+0000 mgr.y (mgr.14152) 84 : cluster [DBG] pgmap v48: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:36.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:36 vm00 bash[20726]: cluster 2026-03-10T14:47:34.698817+0000 mgr.y (mgr.14152) 84 : cluster [DBG] pgmap v48: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:36.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:36 vm00 bash[28403]: cluster 2026-03-10T14:47:34.698817+0000 mgr.y (mgr.14152) 84 : cluster [DBG] pgmap v48: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:36.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:36 vm00 bash[28403]: cluster 2026-03-10T14:47:34.698817+0000 mgr.y (mgr.14152) 84 : cluster [DBG] pgmap v48: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:37.374 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:37 vm03 bash[23394]: audit 2026-03-10T14:47:36.393118+0000 mon.a (mon.0) 311 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T14:47:37.374 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:37 vm03 bash[23394]: audit 2026-03-10T14:47:36.393118+0000 mon.a (mon.0) 311 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T14:47:37.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:37 vm03 bash[23394]: audit 2026-03-10T14:47:36.394327+0000 mon.a (mon.0) 312 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T14:47:37.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:37 vm03 bash[23394]: audit 2026-03-10T14:47:36.394327+0000 mon.a 
(mon.0) 312 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T14:47:37.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:37 vm03 bash[23394]: audit 2026-03-10T14:47:36.394702+0000 mon.a (mon.0) 313 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:47:37.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:37 vm03 bash[23394]: audit 2026-03-10T14:47:36.394702+0000 mon.a (mon.0) 313 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:47:37.470 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:37 vm00 bash[20726]: audit 2026-03-10T14:47:36.393118+0000 mon.a (mon.0) 311 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T14:47:37.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:37 vm00 bash[20726]: audit 2026-03-10T14:47:36.393118+0000 mon.a (mon.0) 311 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T14:47:37.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:37 vm00 bash[20726]: audit 2026-03-10T14:47:36.394327+0000 mon.a (mon.0) 312 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T14:47:37.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:37 vm00 bash[20726]: audit 2026-03-10T14:47:36.394327+0000 mon.a (mon.0) 312 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T14:47:37.471 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:37 vm00 bash[20726]: audit 2026-03-10T14:47:36.394702+0000 mon.a (mon.0) 313 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:47:37.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:37 vm00 bash[20726]: audit 2026-03-10T14:47:36.394702+0000 mon.a (mon.0) 313 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:47:37.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:37 vm00 bash[28403]: audit 2026-03-10T14:47:36.393118+0000 mon.a (mon.0) 311 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T14:47:37.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:37 vm00 bash[28403]: audit 2026-03-10T14:47:36.393118+0000 mon.a (mon.0) 311 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T14:47:37.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:37 vm00 bash[28403]: audit 2026-03-10T14:47:36.394327+0000 mon.a (mon.0) 312 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T14:47:37.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:37 vm00 bash[28403]: audit 2026-03-10T14:47:36.394327+0000 mon.a (mon.0) 312 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T14:47:37.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:37 vm00 bash[28403]: audit 2026-03-10T14:47:36.394702+0000 mon.a (mon.0) 313 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 
cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:47:37.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:37 vm00 bash[28403]: audit 2026-03-10T14:47:36.394702+0000 mon.a (mon.0) 313 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:47:38.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:38 vm03 bash[23394]: audit 2026-03-10T14:47:36.391759+0000 mgr.y (mgr.14152) 85 : audit [DBG] from='client.24137 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:47:38.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:38 vm03 bash[23394]: audit 2026-03-10T14:47:36.391759+0000 mgr.y (mgr.14152) 85 : audit [DBG] from='client.24137 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:47:38.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:38 vm03 bash[23394]: cluster 2026-03-10T14:47:36.699057+0000 mgr.y (mgr.14152) 86 : cluster [DBG] pgmap v49: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:38.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:38 vm03 bash[23394]: cluster 2026-03-10T14:47:36.699057+0000 mgr.y (mgr.14152) 86 : cluster [DBG] pgmap v49: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:38.470 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:38 vm00 bash[28403]: audit 2026-03-10T14:47:36.391759+0000 mgr.y (mgr.14152) 85 : audit [DBG] from='client.24137 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:47:38.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:38 vm00 bash[28403]: audit 2026-03-10T14:47:36.391759+0000 mgr.y (mgr.14152) 85 : audit [DBG] from='client.24137 -' entity='client.admin' 
cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:47:38.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:38 vm00 bash[28403]: cluster 2026-03-10T14:47:36.699057+0000 mgr.y (mgr.14152) 86 : cluster [DBG] pgmap v49: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:38.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:38 vm00 bash[28403]: cluster 2026-03-10T14:47:36.699057+0000 mgr.y (mgr.14152) 86 : cluster [DBG] pgmap v49: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:38.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:38 vm00 bash[20726]: audit 2026-03-10T14:47:36.391759+0000 mgr.y (mgr.14152) 85 : audit [DBG] from='client.24137 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:47:38.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:38 vm00 bash[20726]: audit 2026-03-10T14:47:36.391759+0000 mgr.y (mgr.14152) 85 : audit [DBG] from='client.24137 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:47:38.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:38 vm00 bash[20726]: cluster 2026-03-10T14:47:36.699057+0000 mgr.y (mgr.14152) 86 : cluster [DBG] pgmap v49: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:38.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:38 vm00 bash[20726]: cluster 2026-03-10T14:47:36.699057+0000 mgr.y (mgr.14152) 86 : cluster [DBG] pgmap v49: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:40.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:40 vm03 bash[23394]: cluster 2026-03-10T14:47:38.699377+0000 mgr.y (mgr.14152) 87 : cluster [DBG] pgmap v50: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:40.375 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:40 vm03 bash[23394]: cluster 2026-03-10T14:47:38.699377+0000 mgr.y (mgr.14152) 87 : cluster [DBG] pgmap v50: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:40.470 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:40 vm00 bash[28403]: cluster 2026-03-10T14:47:38.699377+0000 mgr.y (mgr.14152) 87 : cluster [DBG] pgmap v50: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:40.470 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:40 vm00 bash[28403]: cluster 2026-03-10T14:47:38.699377+0000 mgr.y (mgr.14152) 87 : cluster [DBG] pgmap v50: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:40.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:40 vm00 bash[20726]: cluster 2026-03-10T14:47:38.699377+0000 mgr.y (mgr.14152) 87 : cluster [DBG] pgmap v50: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:40.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:40 vm00 bash[20726]: cluster 2026-03-10T14:47:38.699377+0000 mgr.y (mgr.14152) 87 : cluster [DBG] pgmap v50: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:42.470 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:42 vm00 bash[28403]: cluster 2026-03-10T14:47:40.699627+0000 mgr.y (mgr.14152) 88 : cluster [DBG] pgmap v51: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:42.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:42 vm00 bash[28403]: cluster 2026-03-10T14:47:40.699627+0000 mgr.y (mgr.14152) 88 : cluster [DBG] pgmap v51: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:42.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:42 vm00 bash[28403]: audit 2026-03-10T14:47:41.769896+0000 mon.c (mon.2) 7 : audit [INF] from='client.? 
192.168.123.100:0/1842361802' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d926117c-9bf7-44cb-8796-78132bdc13d6"}]: dispatch 2026-03-10T14:47:42.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:42 vm00 bash[28403]: audit 2026-03-10T14:47:41.769896+0000 mon.c (mon.2) 7 : audit [INF] from='client.? 192.168.123.100:0/1842361802' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d926117c-9bf7-44cb-8796-78132bdc13d6"}]: dispatch 2026-03-10T14:47:42.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:42 vm00 bash[28403]: audit 2026-03-10T14:47:41.770141+0000 mon.a (mon.0) 314 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d926117c-9bf7-44cb-8796-78132bdc13d6"}]: dispatch 2026-03-10T14:47:42.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:42 vm00 bash[28403]: audit 2026-03-10T14:47:41.770141+0000 mon.a (mon.0) 314 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d926117c-9bf7-44cb-8796-78132bdc13d6"}]: dispatch 2026-03-10T14:47:42.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:42 vm00 bash[28403]: audit 2026-03-10T14:47:41.773044+0000 mon.a (mon.0) 315 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d926117c-9bf7-44cb-8796-78132bdc13d6"}]': finished 2026-03-10T14:47:42.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:42 vm00 bash[28403]: audit 2026-03-10T14:47:41.773044+0000 mon.a (mon.0) 315 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d926117c-9bf7-44cb-8796-78132bdc13d6"}]': finished 2026-03-10T14:47:42.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:42 vm00 bash[28403]: cluster 2026-03-10T14:47:41.776254+0000 mon.a (mon.0) 316 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in 2026-03-10T14:47:42.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:42 vm00 bash[28403]: cluster 2026-03-10T14:47:41.776254+0000 mon.a (mon.0) 316 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in 2026-03-10T14:47:42.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:42 vm00 bash[28403]: audit 2026-03-10T14:47:41.776957+0000 mon.a (mon.0) 317 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T14:47:42.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:42 vm00 bash[28403]: audit 2026-03-10T14:47:41.776957+0000 mon.a (mon.0) 317 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T14:47:42.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:42 vm00 bash[20726]: cluster 2026-03-10T14:47:40.699627+0000 mgr.y (mgr.14152) 88 : cluster [DBG] pgmap v51: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:42.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:42 vm00 bash[20726]: cluster 2026-03-10T14:47:40.699627+0000 mgr.y (mgr.14152) 88 : cluster [DBG] pgmap v51: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:42.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:42 vm00 bash[20726]: audit 2026-03-10T14:47:41.769896+0000 mon.c (mon.2) 7 : audit [INF] from='client.? 
192.168.123.100:0/1842361802' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d926117c-9bf7-44cb-8796-78132bdc13d6"}]: dispatch 2026-03-10T14:47:42.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:42 vm00 bash[20726]: audit 2026-03-10T14:47:41.769896+0000 mon.c (mon.2) 7 : audit [INF] from='client.? 192.168.123.100:0/1842361802' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d926117c-9bf7-44cb-8796-78132bdc13d6"}]: dispatch 2026-03-10T14:47:42.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:42 vm00 bash[20726]: audit 2026-03-10T14:47:41.770141+0000 mon.a (mon.0) 314 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d926117c-9bf7-44cb-8796-78132bdc13d6"}]: dispatch 2026-03-10T14:47:42.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:42 vm00 bash[20726]: audit 2026-03-10T14:47:41.770141+0000 mon.a (mon.0) 314 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d926117c-9bf7-44cb-8796-78132bdc13d6"}]: dispatch 2026-03-10T14:47:42.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:42 vm00 bash[20726]: audit 2026-03-10T14:47:41.773044+0000 mon.a (mon.0) 315 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d926117c-9bf7-44cb-8796-78132bdc13d6"}]': finished 2026-03-10T14:47:42.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:42 vm00 bash[20726]: audit 2026-03-10T14:47:41.773044+0000 mon.a (mon.0) 315 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d926117c-9bf7-44cb-8796-78132bdc13d6"}]': finished 2026-03-10T14:47:42.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:42 vm00 bash[20726]: cluster 2026-03-10T14:47:41.776254+0000 mon.a (mon.0) 316 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in 2026-03-10T14:47:42.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:42 vm00 bash[20726]: cluster 2026-03-10T14:47:41.776254+0000 mon.a (mon.0) 316 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in 2026-03-10T14:47:42.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:42 vm00 bash[20726]: audit 2026-03-10T14:47:41.776957+0000 mon.a (mon.0) 317 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T14:47:42.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:42 vm00 bash[20726]: audit 2026-03-10T14:47:41.776957+0000 mon.a (mon.0) 317 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T14:47:42.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:42 vm03 bash[23394]: cluster 2026-03-10T14:47:40.699627+0000 mgr.y (mgr.14152) 88 : cluster [DBG] pgmap v51: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:42.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:42 vm03 bash[23394]: cluster 2026-03-10T14:47:40.699627+0000 mgr.y (mgr.14152) 88 : cluster [DBG] pgmap v51: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:42.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:42 vm03 bash[23394]: audit 2026-03-10T14:47:41.769896+0000 mon.c (mon.2) 7 : audit [INF] from='client.? 
192.168.123.100:0/1842361802' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d926117c-9bf7-44cb-8796-78132bdc13d6"}]: dispatch 2026-03-10T14:47:42.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:42 vm03 bash[23394]: audit 2026-03-10T14:47:41.769896+0000 mon.c (mon.2) 7 : audit [INF] from='client.? 192.168.123.100:0/1842361802' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d926117c-9bf7-44cb-8796-78132bdc13d6"}]: dispatch 2026-03-10T14:47:42.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:42 vm03 bash[23394]: audit 2026-03-10T14:47:41.770141+0000 mon.a (mon.0) 314 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d926117c-9bf7-44cb-8796-78132bdc13d6"}]: dispatch 2026-03-10T14:47:42.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:42 vm03 bash[23394]: audit 2026-03-10T14:47:41.770141+0000 mon.a (mon.0) 314 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d926117c-9bf7-44cb-8796-78132bdc13d6"}]: dispatch 2026-03-10T14:47:42.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:42 vm03 bash[23394]: audit 2026-03-10T14:47:41.773044+0000 mon.a (mon.0) 315 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d926117c-9bf7-44cb-8796-78132bdc13d6"}]': finished 2026-03-10T14:47:42.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:42 vm03 bash[23394]: audit 2026-03-10T14:47:41.773044+0000 mon.a (mon.0) 315 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d926117c-9bf7-44cb-8796-78132bdc13d6"}]': finished 2026-03-10T14:47:42.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:42 vm03 bash[23394]: cluster 2026-03-10T14:47:41.776254+0000 mon.a (mon.0) 316 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in 2026-03-10T14:47:42.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:42 vm03 bash[23394]: cluster 2026-03-10T14:47:41.776254+0000 mon.a (mon.0) 316 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in 2026-03-10T14:47:42.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:42 vm03 bash[23394]: audit 2026-03-10T14:47:41.776957+0000 mon.a (mon.0) 317 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T14:47:42.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:42 vm03 bash[23394]: audit 2026-03-10T14:47:41.776957+0000 mon.a (mon.0) 317 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T14:47:43.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:43 vm00 bash[28403]: audit 2026-03-10T14:47:42.599476+0000 mon.a (mon.0) 318 : audit [DBG] from='client.? 192.168.123.100:0/4249974305' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T14:47:43.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:43 vm00 bash[28403]: audit 2026-03-10T14:47:42.599476+0000 mon.a (mon.0) 318 : audit [DBG] from='client.? 192.168.123.100:0/4249974305' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T14:47:43.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:43 vm00 bash[20726]: audit 2026-03-10T14:47:42.599476+0000 mon.a (mon.0) 318 : audit [DBG] from='client.? 
192.168.123.100:0/4249974305' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T14:47:43.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:43 vm00 bash[20726]: audit 2026-03-10T14:47:42.599476+0000 mon.a (mon.0) 318 : audit [DBG] from='client.? 192.168.123.100:0/4249974305' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T14:47:43.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:43 vm03 bash[23394]: audit 2026-03-10T14:47:42.599476+0000 mon.a (mon.0) 318 : audit [DBG] from='client.? 192.168.123.100:0/4249974305' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T14:47:43.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:43 vm03 bash[23394]: audit 2026-03-10T14:47:42.599476+0000 mon.a (mon.0) 318 : audit [DBG] from='client.? 192.168.123.100:0/4249974305' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T14:47:44.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:44 vm03 bash[23394]: cluster 2026-03-10T14:47:42.699955+0000 mgr.y (mgr.14152) 89 : cluster [DBG] pgmap v53: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:44.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:44 vm03 bash[23394]: cluster 2026-03-10T14:47:42.699955+0000 mgr.y (mgr.14152) 89 : cluster [DBG] pgmap v53: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:44.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:44 vm00 bash[28403]: cluster 2026-03-10T14:47:42.699955+0000 mgr.y (mgr.14152) 89 : cluster [DBG] pgmap v53: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:44.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:44 vm00 bash[28403]: cluster 2026-03-10T14:47:42.699955+0000 mgr.y (mgr.14152) 89 : cluster [DBG] pgmap v53: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:44.721 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:44 vm00 
bash[20726]: cluster 2026-03-10T14:47:42.699955+0000 mgr.y (mgr.14152) 89 : cluster [DBG] pgmap v53: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:44.721 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:44 vm00 bash[20726]: cluster 2026-03-10T14:47:42.699955+0000 mgr.y (mgr.14152) 89 : cluster [DBG] pgmap v53: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:45.624 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:45 vm03 bash[23394]: cluster 2026-03-10T14:47:44.700314+0000 mgr.y (mgr.14152) 90 : cluster [DBG] pgmap v54: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:45.624 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:45 vm03 bash[23394]: cluster 2026-03-10T14:47:44.700314+0000 mgr.y (mgr.14152) 90 : cluster [DBG] pgmap v54: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:45.720 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:45 vm00 bash[28403]: cluster 2026-03-10T14:47:44.700314+0000 mgr.y (mgr.14152) 90 : cluster [DBG] pgmap v54: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:45.720 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:45 vm00 bash[28403]: cluster 2026-03-10T14:47:44.700314+0000 mgr.y (mgr.14152) 90 : cluster [DBG] pgmap v54: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:45.720 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:45 vm00 bash[20726]: cluster 2026-03-10T14:47:44.700314+0000 mgr.y (mgr.14152) 90 : cluster [DBG] pgmap v54: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:45.721 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:45 vm00 bash[20726]: cluster 2026-03-10T14:47:44.700314+0000 mgr.y (mgr.14152) 90 : cluster [DBG] pgmap v54: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:48.124 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:47 vm03 bash[23394]: cluster 2026-03-10T14:47:46.700643+0000 mgr.y (mgr.14152) 91 : cluster 
[DBG] pgmap v55: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:48.124 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:47 vm03 bash[23394]: cluster 2026-03-10T14:47:46.700643+0000 mgr.y (mgr.14152) 91 : cluster [DBG] pgmap v55: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:48.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:47 vm00 bash[28403]: cluster 2026-03-10T14:47:46.700643+0000 mgr.y (mgr.14152) 91 : cluster [DBG] pgmap v55: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:48.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:47 vm00 bash[28403]: cluster 2026-03-10T14:47:46.700643+0000 mgr.y (mgr.14152) 91 : cluster [DBG] pgmap v55: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:48.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:47 vm00 bash[20726]: cluster 2026-03-10T14:47:46.700643+0000 mgr.y (mgr.14152) 91 : cluster [DBG] pgmap v55: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:48.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:47 vm00 bash[20726]: cluster 2026-03-10T14:47:46.700643+0000 mgr.y (mgr.14152) 91 : cluster [DBG] pgmap v55: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:50.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:49 vm03 bash[23394]: cluster 2026-03-10T14:47:48.700931+0000 mgr.y (mgr.14152) 92 : cluster [DBG] pgmap v56: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:50.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:49 vm03 bash[23394]: cluster 2026-03-10T14:47:48.700931+0000 mgr.y (mgr.14152) 92 : cluster [DBG] pgmap v56: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:50.220 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:49 vm00 bash[28403]: cluster 2026-03-10T14:47:48.700931+0000 mgr.y (mgr.14152) 92 : cluster [DBG] pgmap v56: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 
2026-03-10T14:47:50.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:49 vm00 bash[28403]: cluster 2026-03-10T14:47:48.700931+0000 mgr.y (mgr.14152) 92 : cluster [DBG] pgmap v56: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:50.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:49 vm00 bash[20726]: cluster 2026-03-10T14:47:48.700931+0000 mgr.y (mgr.14152) 92 : cluster [DBG] pgmap v56: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:50.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:49 vm00 bash[20726]: cluster 2026-03-10T14:47:48.700931+0000 mgr.y (mgr.14152) 92 : cluster [DBG] pgmap v56: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:52.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:51 vm03 bash[23394]: cluster 2026-03-10T14:47:50.701172+0000 mgr.y (mgr.14152) 93 : cluster [DBG] pgmap v57: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:52.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:51 vm03 bash[23394]: cluster 2026-03-10T14:47:50.701172+0000 mgr.y (mgr.14152) 93 : cluster [DBG] pgmap v57: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:52.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:51 vm03 bash[23394]: audit 2026-03-10T14:47:51.609906+0000 mon.a (mon.0) 319 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-10T14:47:52.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:51 vm03 bash[23394]: audit 2026-03-10T14:47:51.609906+0000 mon.a (mon.0) 319 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-10T14:47:52.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:51 vm03 bash[23394]: audit 2026-03-10T14:47:51.610527+0000 mon.a (mon.0) 320 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 
cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:47:52.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:51 vm03 bash[23394]: audit 2026-03-10T14:47:51.610527+0000 mon.a (mon.0) 320 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:47:52.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:51 vm00 bash[20726]: cluster 2026-03-10T14:47:50.701172+0000 mgr.y (mgr.14152) 93 : cluster [DBG] pgmap v57: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:52.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:51 vm00 bash[20726]: cluster 2026-03-10T14:47:50.701172+0000 mgr.y (mgr.14152) 93 : cluster [DBG] pgmap v57: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:52.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:51 vm00 bash[20726]: audit 2026-03-10T14:47:51.609906+0000 mon.a (mon.0) 319 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-10T14:47:52.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:51 vm00 bash[20726]: audit 2026-03-10T14:47:51.609906+0000 mon.a (mon.0) 319 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-10T14:47:52.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:51 vm00 bash[20726]: audit 2026-03-10T14:47:51.610527+0000 mon.a (mon.0) 320 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:47:52.128 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:51 vm00 bash[20726]: audit 2026-03-10T14:47:51.610527+0000 mon.a (mon.0) 320 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:47:52.128 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:51 vm00 bash[28403]: cluster 2026-03-10T14:47:50.701172+0000 mgr.y (mgr.14152) 93 : cluster [DBG] pgmap v57: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:52.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:51 vm00 bash[28403]: cluster 2026-03-10T14:47:50.701172+0000 mgr.y (mgr.14152) 93 : cluster [DBG] pgmap v57: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:52.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:51 vm00 bash[28403]: audit 2026-03-10T14:47:51.609906+0000 mon.a (mon.0) 319 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-10T14:47:52.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:51 vm00 bash[28403]: audit 2026-03-10T14:47:51.609906+0000 mon.a (mon.0) 319 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-10T14:47:52.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:51 vm00 bash[28403]: audit 2026-03-10T14:47:51.610527+0000 mon.a (mon.0) 320 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:47:52.129 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:51 vm00 bash[28403]: audit 2026-03-10T14:47:51.610527+0000 mon.a (mon.0) 320 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:47:52.721 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:52 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:47:52.721 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:52 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:47:52.721 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:47:52 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:47:52.721 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:47:52 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:47:52.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:52 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-10T14:47:52.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:52 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:47:52.721 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 10 14:47:52 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:47:52.721 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 10 14:47:52 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-10T14:47:53.171 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:52 vm00 bash[20726]: cephadm 2026-03-10T14:47:51.611043+0000 mgr.y (mgr.14152) 94 : cephadm [INF] Deploying daemon osd.1 on vm00 2026-03-10T14:47:53.171 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:52 vm00 bash[20726]: cephadm 2026-03-10T14:47:51.611043+0000 mgr.y (mgr.14152) 94 : cephadm [INF] Deploying daemon osd.1 on vm00 2026-03-10T14:47:53.171 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:52 vm00 bash[20726]: audit 2026-03-10T14:47:52.783651+0000 mon.a (mon.0) 321 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:47:53.171 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:52 vm00 bash[20726]: audit 2026-03-10T14:47:52.783651+0000 mon.a (mon.0) 321 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:47:53.171 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:52 vm00 bash[28403]: cephadm 2026-03-10T14:47:51.611043+0000 mgr.y (mgr.14152) 94 : cephadm [INF] Deploying daemon osd.1 on vm00 2026-03-10T14:47:53.171 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:52 vm00 bash[28403]: cephadm 2026-03-10T14:47:51.611043+0000 mgr.y (mgr.14152) 94 : cephadm [INF] Deploying daemon osd.1 on vm00 2026-03-10T14:47:53.171 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:52 vm00 bash[28403]: audit 2026-03-10T14:47:52.783651+0000 mon.a (mon.0) 321 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:47:53.171 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:52 vm00 bash[28403]: audit 2026-03-10T14:47:52.783651+0000 mon.a (mon.0) 321 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:47:53.375 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:52 vm03 bash[23394]: cephadm 2026-03-10T14:47:51.611043+0000 mgr.y (mgr.14152) 94 : cephadm [INF] Deploying daemon osd.1 on vm00 2026-03-10T14:47:53.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:52 vm03 bash[23394]: cephadm 2026-03-10T14:47:51.611043+0000 mgr.y (mgr.14152) 94 : cephadm [INF] Deploying daemon osd.1 on vm00 2026-03-10T14:47:53.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:52 vm03 bash[23394]: audit 2026-03-10T14:47:52.783651+0000 mon.a (mon.0) 321 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:47:53.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:52 vm03 bash[23394]: audit 2026-03-10T14:47:52.783651+0000 mon.a (mon.0) 321 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:47:54.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:53 vm00 bash[28403]: cluster 2026-03-10T14:47:52.701450+0000 mgr.y (mgr.14152) 95 : cluster [DBG] pgmap v58: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:54.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:53 vm00 bash[28403]: cluster 2026-03-10T14:47:52.701450+0000 mgr.y (mgr.14152) 95 : cluster [DBG] pgmap v58: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:54.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:53 vm00 bash[28403]: audit 2026-03-10T14:47:52.874531+0000 mon.a (mon.0) 322 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:47:54.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:53 vm00 bash[28403]: audit 2026-03-10T14:47:52.874531+0000 mon.a (mon.0) 322 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:47:54.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:53 vm00 bash[28403]: audit 
2026-03-10T14:47:52.910770+0000 mon.a (mon.0) 323 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:47:54.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:53 vm00 bash[28403]: audit 2026-03-10T14:47:52.910770+0000 mon.a (mon.0) 323 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:47:54.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:53 vm00 bash[20726]: cluster 2026-03-10T14:47:52.701450+0000 mgr.y (mgr.14152) 95 : cluster [DBG] pgmap v58: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:54.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:53 vm00 bash[20726]: cluster 2026-03-10T14:47:52.701450+0000 mgr.y (mgr.14152) 95 : cluster [DBG] pgmap v58: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:54.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:53 vm00 bash[20726]: audit 2026-03-10T14:47:52.874531+0000 mon.a (mon.0) 322 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:47:54.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:53 vm00 bash[20726]: audit 2026-03-10T14:47:52.874531+0000 mon.a (mon.0) 322 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:47:54.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:53 vm00 bash[20726]: audit 2026-03-10T14:47:52.910770+0000 mon.a (mon.0) 323 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:47:54.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:53 vm00 bash[20726]: audit 2026-03-10T14:47:52.910770+0000 mon.a (mon.0) 323 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:47:54.374 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:53 vm03 bash[23394]: cluster 2026-03-10T14:47:52.701450+0000 mgr.y (mgr.14152) 95 : cluster [DBG] pgmap v58: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 
2026-03-10T14:47:54.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:53 vm03 bash[23394]: cluster 2026-03-10T14:47:52.701450+0000 mgr.y (mgr.14152) 95 : cluster [DBG] pgmap v58: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:54.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:53 vm03 bash[23394]: audit 2026-03-10T14:47:52.874531+0000 mon.a (mon.0) 322 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:47:54.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:53 vm03 bash[23394]: audit 2026-03-10T14:47:52.874531+0000 mon.a (mon.0) 322 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:47:54.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:53 vm03 bash[23394]: audit 2026-03-10T14:47:52.910770+0000 mon.a (mon.0) 323 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:47:54.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:53 vm03 bash[23394]: audit 2026-03-10T14:47:52.910770+0000 mon.a (mon.0) 323 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:47:56.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:56 vm00 bash[28403]: cluster 2026-03-10T14:47:54.701763+0000 mgr.y (mgr.14152) 96 : cluster [DBG] pgmap v59: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:56.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:56 vm00 bash[28403]: cluster 2026-03-10T14:47:54.701763+0000 mgr.y (mgr.14152) 96 : cluster [DBG] pgmap v59: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:56.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:56 vm00 bash[20726]: cluster 2026-03-10T14:47:54.701763+0000 mgr.y (mgr.14152) 96 : cluster [DBG] pgmap v59: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:56.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:56 vm00 bash[20726]: cluster 
2026-03-10T14:47:54.701763+0000 mgr.y (mgr.14152) 96 : cluster [DBG] pgmap v59: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:56.624 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:56 vm03 bash[23394]: cluster 2026-03-10T14:47:54.701763+0000 mgr.y (mgr.14152) 96 : cluster [DBG] pgmap v59: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:56.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:56 vm03 bash[23394]: cluster 2026-03-10T14:47:54.701763+0000 mgr.y (mgr.14152) 96 : cluster [DBG] pgmap v59: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:57.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:57 vm00 bash[28403]: audit 2026-03-10T14:47:56.716082+0000 mon.a (mon.0) 324 : audit [INF] from='osd.1 v2:192.168.123.100:6805/198852601' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T14:47:57.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:57 vm00 bash[28403]: audit 2026-03-10T14:47:56.716082+0000 mon.a (mon.0) 324 : audit [INF] from='osd.1 v2:192.168.123.100:6805/198852601' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T14:47:57.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:57 vm00 bash[20726]: audit 2026-03-10T14:47:56.716082+0000 mon.a (mon.0) 324 : audit [INF] from='osd.1 v2:192.168.123.100:6805/198852601' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T14:47:57.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:57 vm00 bash[20726]: audit 2026-03-10T14:47:56.716082+0000 mon.a (mon.0) 324 : audit [INF] from='osd.1 v2:192.168.123.100:6805/198852601' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T14:47:57.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:57 vm03 bash[23394]: audit 
2026-03-10T14:47:56.716082+0000 mon.a (mon.0) 324 : audit [INF] from='osd.1 v2:192.168.123.100:6805/198852601' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T14:47:57.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:57 vm03 bash[23394]: audit 2026-03-10T14:47:56.716082+0000 mon.a (mon.0) 324 : audit [INF] from='osd.1 v2:192.168.123.100:6805/198852601' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T14:47:59.402 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:59 vm00 bash[28403]: cluster 2026-03-10T14:47:56.701977+0000 mgr.y (mgr.14152) 97 : cluster [DBG] pgmap v60: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:59.402 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:59 vm00 bash[28403]: cluster 2026-03-10T14:47:56.701977+0000 mgr.y (mgr.14152) 97 : cluster [DBG] pgmap v60: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:59.402 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:59 vm00 bash[28403]: audit 2026-03-10T14:47:57.224693+0000 mon.a (mon.0) 325 : audit [INF] from='osd.1 v2:192.168.123.100:6805/198852601' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-10T14:47:59.402 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:59 vm00 bash[28403]: audit 2026-03-10T14:47:57.224693+0000 mon.a (mon.0) 325 : audit [INF] from='osd.1 v2:192.168.123.100:6805/198852601' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-10T14:47:59.402 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:59 vm00 bash[28403]: cluster 2026-03-10T14:47:57.229365+0000 mon.a (mon.0) 326 : cluster [DBG] osdmap e11: 2 total, 1 up, 2 in 2026-03-10T14:47:59.402 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:59 vm00 bash[28403]: cluster 2026-03-10T14:47:57.229365+0000 mon.a (mon.0) 
326 : cluster [DBG] osdmap e11: 2 total, 1 up, 2 in 2026-03-10T14:47:59.402 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:59 vm00 bash[28403]: audit 2026-03-10T14:47:57.229654+0000 mon.a (mon.0) 327 : audit [INF] from='osd.1 v2:192.168.123.100:6805/198852601' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-10T14:47:59.402 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:59 vm00 bash[28403]: audit 2026-03-10T14:47:57.229654+0000 mon.a (mon.0) 327 : audit [INF] from='osd.1 v2:192.168.123.100:6805/198852601' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-10T14:47:59.402 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:59 vm00 bash[28403]: audit 2026-03-10T14:47:57.229763+0000 mon.a (mon.0) 328 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T14:47:59.402 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:47:59 vm00 bash[28403]: audit 2026-03-10T14:47:57.229763+0000 mon.a (mon.0) 328 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T14:47:59.403 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:59 vm00 bash[20726]: cluster 2026-03-10T14:47:56.701977+0000 mgr.y (mgr.14152) 97 : cluster [DBG] pgmap v60: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:59.403 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:59 vm00 bash[20726]: cluster 2026-03-10T14:47:56.701977+0000 mgr.y (mgr.14152) 97 : cluster [DBG] pgmap v60: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:59.403 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:59 vm00 bash[20726]: audit 2026-03-10T14:47:57.224693+0000 mon.a (mon.0) 325 : audit [INF] from='osd.1 
v2:192.168.123.100:6805/198852601' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-10T14:47:59.403 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:59 vm00 bash[20726]: audit 2026-03-10T14:47:57.224693+0000 mon.a (mon.0) 325 : audit [INF] from='osd.1 v2:192.168.123.100:6805/198852601' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-10T14:47:59.403 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:59 vm00 bash[20726]: cluster 2026-03-10T14:47:57.229365+0000 mon.a (mon.0) 326 : cluster [DBG] osdmap e11: 2 total, 1 up, 2 in 2026-03-10T14:47:59.403 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:59 vm00 bash[20726]: cluster 2026-03-10T14:47:57.229365+0000 mon.a (mon.0) 326 : cluster [DBG] osdmap e11: 2 total, 1 up, 2 in 2026-03-10T14:47:59.403 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:59 vm00 bash[20726]: audit 2026-03-10T14:47:57.229654+0000 mon.a (mon.0) 327 : audit [INF] from='osd.1 v2:192.168.123.100:6805/198852601' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-10T14:47:59.403 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:59 vm00 bash[20726]: audit 2026-03-10T14:47:57.229654+0000 mon.a (mon.0) 327 : audit [INF] from='osd.1 v2:192.168.123.100:6805/198852601' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-10T14:47:59.403 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:59 vm00 bash[20726]: audit 2026-03-10T14:47:57.229763+0000 mon.a (mon.0) 328 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T14:47:59.403 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:47:59 vm00 bash[20726]: audit 2026-03-10T14:47:57.229763+0000 mon.a (mon.0) 328 : 
audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T14:47:59.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:59 vm03 bash[23394]: cluster 2026-03-10T14:47:56.701977+0000 mgr.y (mgr.14152) 97 : cluster [DBG] pgmap v60: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:59.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:59 vm03 bash[23394]: cluster 2026-03-10T14:47:56.701977+0000 mgr.y (mgr.14152) 97 : cluster [DBG] pgmap v60: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:47:59.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:59 vm03 bash[23394]: audit 2026-03-10T14:47:57.224693+0000 mon.a (mon.0) 325 : audit [INF] from='osd.1 v2:192.168.123.100:6805/198852601' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-10T14:47:59.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:59 vm03 bash[23394]: audit 2026-03-10T14:47:57.224693+0000 mon.a (mon.0) 325 : audit [INF] from='osd.1 v2:192.168.123.100:6805/198852601' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-10T14:47:59.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:59 vm03 bash[23394]: cluster 2026-03-10T14:47:57.229365+0000 mon.a (mon.0) 326 : cluster [DBG] osdmap e11: 2 total, 1 up, 2 in 2026-03-10T14:47:59.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:59 vm03 bash[23394]: cluster 2026-03-10T14:47:57.229365+0000 mon.a (mon.0) 326 : cluster [DBG] osdmap e11: 2 total, 1 up, 2 in 2026-03-10T14:47:59.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:59 vm03 bash[23394]: audit 2026-03-10T14:47:57.229654+0000 mon.a (mon.0) 327 : audit [INF] from='osd.1 v2:192.168.123.100:6805/198852601' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 
2026-03-10T14:47:59.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:59 vm03 bash[23394]: audit 2026-03-10T14:47:57.229654+0000 mon.a (mon.0) 327 : audit [INF] from='osd.1 v2:192.168.123.100:6805/198852601' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-10T14:47:59.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:59 vm03 bash[23394]: audit 2026-03-10T14:47:57.229763+0000 mon.a (mon.0) 328 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T14:47:59.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:47:59 vm03 bash[23394]: audit 2026-03-10T14:47:57.229763+0000 mon.a (mon.0) 328 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T14:48:01.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:01 vm00 bash[20726]: cluster 2026-03-10T14:47:57.702997+0000 osd.1 (osd.1) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T14:48:01.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:01 vm00 bash[20726]: cluster 2026-03-10T14:47:57.702997+0000 osd.1 (osd.1) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T14:48:01.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:01 vm00 bash[20726]: cluster 2026-03-10T14:47:57.703053+0000 osd.1 (osd.1) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T14:48:01.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:01 vm00 bash[20726]: cluster 2026-03-10T14:47:57.703053+0000 osd.1 (osd.1) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T14:48:01.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:01 vm00 bash[20726]: audit 2026-03-10T14:47:58.474629+0000 mon.a (mon.0) 329 : audit [INF] from='osd.1 v2:192.168.123.100:6805/198852601' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": 
["host=vm00", "root=default"]}]': finished 2026-03-10T14:48:01.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:01 vm00 bash[20726]: audit 2026-03-10T14:47:58.474629+0000 mon.a (mon.0) 329 : audit [INF] from='osd.1 v2:192.168.123.100:6805/198852601' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished 2026-03-10T14:48:01.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:01 vm00 bash[20726]: cluster 2026-03-10T14:47:58.623727+0000 mon.a (mon.0) 330 : cluster [DBG] osdmap e12: 2 total, 1 up, 2 in 2026-03-10T14:48:01.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:01 vm00 bash[20726]: cluster 2026-03-10T14:47:58.623727+0000 mon.a (mon.0) 330 : cluster [DBG] osdmap e12: 2 total, 1 up, 2 in 2026-03-10T14:48:01.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:01 vm00 bash[20726]: cluster 2026-03-10T14:47:58.702207+0000 mgr.y (mgr.14152) 98 : cluster [DBG] pgmap v63: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:48:01.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:01 vm00 bash[20726]: cluster 2026-03-10T14:47:58.702207+0000 mgr.y (mgr.14152) 98 : cluster [DBG] pgmap v63: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T14:48:01.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:01 vm00 bash[20726]: audit 2026-03-10T14:47:58.768869+0000 mon.a (mon.0) 331 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T14:48:01.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:01 vm00 bash[20726]: audit 2026-03-10T14:47:58.768869+0000 mon.a (mon.0) 331 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T14:48:01.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:01 vm00 bash[20726]: audit 2026-03-10T14:47:58.772011+0000 mon.a (mon.0) 332 : audit 
[DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T14:48:01.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:01 vm00 bash[20726]: audit 2026-03-10T14:47:59.278514+0000 mon.a (mon.0) 333 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:48:01.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:01 vm00 bash[20726]: audit 2026-03-10T14:47:59.292056+0000 mon.a (mon.0) 334 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:48:01.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:01 vm00 bash[20726]: audit 2026-03-10T14:47:59.295831+0000 mon.a (mon.0) 335 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:48:01.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:01 vm00 bash[20726]: audit 2026-03-10T14:47:59.298306+0000 mon.a (mon.0) 336 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T14:48:01.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:01 vm00 bash[20726]: audit 2026-03-10T14:47:59.310828+0000 mon.a (mon.0) 337 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:48:01.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:01 vm00 bash[20726]: audit 2026-03-10T14:47:59.771495+0000 mon.a (mon.0) 338 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T14:48:01.472 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:01 vm00 bash[28403]: cluster 2026-03-10T14:47:57.702997+0000 osd.1 (osd.1) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T14:48:01.472 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:01 vm00 bash[28403]: cluster 2026-03-10T14:47:57.703053+0000 osd.1 (osd.1) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T14:48:01.472 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:01 vm00 bash[28403]: audit 2026-03-10T14:47:58.474629+0000 mon.a (mon.0) 329 : audit [INF] from='osd.1 v2:192.168.123.100:6805/198852601' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished
2026-03-10T14:48:01.472 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:01 vm00 bash[28403]: cluster 2026-03-10T14:47:58.623727+0000 mon.a (mon.0) 330 : cluster [DBG] osdmap e12: 2 total, 1 up, 2 in
2026-03-10T14:48:01.472 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:01 vm00 bash[28403]: cluster 2026-03-10T14:47:58.702207+0000 mgr.y (mgr.14152) 98 : cluster [DBG] pgmap v63: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T14:48:01.472 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:01 vm00 bash[28403]: audit 2026-03-10T14:47:58.768869+0000 mon.a (mon.0) 331 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T14:48:01.472 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:01 vm00 bash[28403]: audit 2026-03-10T14:47:58.772011+0000 mon.a (mon.0) 332 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T14:48:01.472 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:01 vm00 bash[28403]: audit 2026-03-10T14:47:59.278514+0000 mon.a (mon.0) 333 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:48:01.472 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:01 vm00 bash[28403]: audit 2026-03-10T14:47:59.292056+0000 mon.a (mon.0) 334 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:48:01.472 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:01 vm00 bash[28403]: audit 2026-03-10T14:47:59.295831+0000 mon.a (mon.0) 335 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:48:01.472 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:01 vm00 bash[28403]: audit 2026-03-10T14:47:59.298306+0000 mon.a (mon.0) 336 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T14:48:01.472 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:01 vm00 bash[28403]: audit 2026-03-10T14:47:59.310828+0000 mon.a (mon.0) 337 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:48:01.472 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:01 vm00 bash[28403]: audit 2026-03-10T14:47:59.771495+0000 mon.a (mon.0) 338 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T14:48:01.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:01 vm03 bash[23394]: cluster 2026-03-10T14:47:57.702997+0000 osd.1 (osd.1) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T14:48:01.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:01 vm03 bash[23394]: cluster 2026-03-10T14:47:57.703053+0000 osd.1 (osd.1) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T14:48:01.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:01 vm03 bash[23394]: audit 2026-03-10T14:47:58.474629+0000 mon.a (mon.0) 329 : audit [INF] from='osd.1 v2:192.168.123.100:6805/198852601' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished
2026-03-10T14:48:01.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:01 vm03 bash[23394]: cluster 2026-03-10T14:47:58.623727+0000 mon.a (mon.0) 330 : cluster [DBG] osdmap e12: 2 total, 1 up, 2 in
2026-03-10T14:48:01.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:01 vm03 bash[23394]: cluster 2026-03-10T14:47:58.702207+0000 mgr.y (mgr.14152) 98 : cluster [DBG] pgmap v63: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T14:48:01.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:01 vm03 bash[23394]: audit 2026-03-10T14:47:58.768869+0000 mon.a (mon.0) 331 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T14:48:01.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:01 vm03 bash[23394]: audit 2026-03-10T14:47:58.772011+0000 mon.a (mon.0) 332 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T14:48:01.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:01 vm03 bash[23394]: audit 2026-03-10T14:47:59.278514+0000 mon.a (mon.0) 333 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:48:01.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:01 vm03 bash[23394]: audit 2026-03-10T14:47:59.292056+0000 mon.a (mon.0) 334 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:48:01.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:01 vm03 bash[23394]: audit 2026-03-10T14:47:59.295831+0000 mon.a (mon.0) 335 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:48:01.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:01 vm03 bash[23394]: audit 2026-03-10T14:47:59.298306+0000 mon.a (mon.0) 336 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T14:48:01.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:01 vm03 bash[23394]: audit 2026-03-10T14:47:59.310828+0000 mon.a (mon.0) 337 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:48:01.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:01 vm03 bash[23394]: audit 2026-03-10T14:47:59.771495+0000 mon.a (mon.0) 338 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T14:48:01.645 INFO:teuthology.orchestra.run.vm00.stdout:Created osd(s) 1 on host 'vm00'
2026-03-10T14:48:01.839 DEBUG:teuthology.orchestra.run.vm00:osd.1> sudo journalctl -f -n 0 -u ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@osd.1.service
2026-03-10T14:48:01.840 INFO:tasks.cephadm:Deploying osd.2 on vm00 with /dev/vdc...
2026-03-10T14:48:01.840 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 -- lvm zap /dev/vdc
2026-03-10T14:48:02.209 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:02 vm00 bash[28403]: cluster 2026-03-10T14:48:00.667661+0000 mon.a (mon.0) 339 : cluster [INF] osd.1 v2:192.168.123.100:6805/198852601 boot
2026-03-10T14:48:02.209 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:02 vm00 bash[28403]: cluster 2026-03-10T14:48:00.667742+0000 mon.a (mon.0) 340 : cluster [DBG] osdmap e13: 2 total, 2 up, 2 in
2026-03-10T14:48:02.209 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:02 vm00 bash[28403]: cluster 2026-03-10T14:48:00.702409+0000 mgr.y (mgr.14152) 99 : cluster [DBG] pgmap v65: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T14:48:02.209 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:02 vm00 bash[28403]: audit 2026-03-10T14:48:00.832935+0000 mon.a (mon.0) 341 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T14:48:02.209 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:02 vm00 bash[28403]: audit 2026-03-10T14:48:01.319957+0000 mon.a (mon.0) 342 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:48:02.209 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:02 vm00 bash[28403]: audit 2026-03-10T14:48:01.438119+0000 mon.a (mon.0) 343 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:48:02.209 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:02 vm00 bash[28403]: audit 2026-03-10T14:48:01.637263+0000 mon.a (mon.0) 344 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:48:02.209 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:02 vm00 bash[20726]: cluster 2026-03-10T14:48:00.667661+0000 mon.a (mon.0) 339 : cluster [INF] osd.1 v2:192.168.123.100:6805/198852601 boot
2026-03-10T14:48:02.209 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:02 vm00 bash[20726]: cluster 2026-03-10T14:48:00.667742+0000 mon.a (mon.0) 340 : cluster [DBG] osdmap e13: 2 total, 2 up, 2 in
2026-03-10T14:48:02.209 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:02 vm00 bash[20726]: cluster 2026-03-10T14:48:00.702409+0000 mgr.y (mgr.14152) 99 : cluster [DBG] pgmap v65: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T14:48:02.209 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:02 vm00 bash[20726]: audit 2026-03-10T14:48:00.832935+0000 mon.a (mon.0) 341 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T14:48:02.210 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:02 vm00 bash[20726]: audit 2026-03-10T14:48:01.319957+0000 mon.a (mon.0) 342 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:48:02.210 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:02 vm00 bash[20726]: audit 2026-03-10T14:48:01.438119+0000 mon.a (mon.0) 343 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:48:02.210 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:02 vm00 bash[20726]: audit 2026-03-10T14:48:01.637263+0000 mon.a (mon.0) 344 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:48:02.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:02 vm03 bash[23394]: cluster 2026-03-10T14:48:00.667661+0000 mon.a (mon.0) 339 : cluster [INF] osd.1 v2:192.168.123.100:6805/198852601 boot
2026-03-10T14:48:02.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:02 vm03 bash[23394]: cluster 2026-03-10T14:48:00.667742+0000 mon.a (mon.0) 340 : cluster [DBG] osdmap e13: 2 total, 2 up, 2 in
2026-03-10T14:48:02.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:02 vm03 bash[23394]: cluster 2026-03-10T14:48:00.702409+0000 mgr.y (mgr.14152) 99 : cluster [DBG] pgmap v65: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T14:48:02.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:02 vm03 bash[23394]: audit 2026-03-10T14:48:00.832935+0000 mon.a (mon.0) 341 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T14:48:02.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:02 vm03 bash[23394]: audit 2026-03-10T14:48:01.319957+0000 mon.a (mon.0) 342 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:48:02.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:02 vm03 bash[23394]: audit 2026-03-10T14:48:01.438119+0000 mon.a (mon.0) 343 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:48:02.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:02 vm03 bash[23394]: audit 2026-03-10T14:48:01.637263+0000 mon.a (mon.0) 344 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:48:03.970 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:03 vm00 bash[28403]: cluster 2026-03-10T14:48:02.692201+0000 mon.a (mon.0) 345 : cluster [DBG] osdmap e14: 2 total, 2 up, 2 in
2026-03-10T14:48:03.993 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:03 vm00 bash[28403]: cluster 2026-03-10T14:48:02.702608+0000 mgr.y (mgr.14152) 100 : cluster [DBG] pgmap v67: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T14:48:03.993 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:03 vm00 bash[20726]: cluster 2026-03-10T14:48:02.692201+0000 mon.a (mon.0) 345 : cluster [DBG] osdmap e14: 2 total, 2 up, 2 in
2026-03-10T14:48:03.993 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:03 vm00 bash[20726]: cluster 2026-03-10T14:48:02.702608+0000 mgr.y (mgr.14152) 100 : cluster [DBG] pgmap v67: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T14:48:04.124 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:03 vm03 bash[23394]: cluster 2026-03-10T14:48:02.692201+0000 mon.a (mon.0) 345 : cluster [DBG] osdmap e14: 2 total, 2 up, 2 in
2026-03-10T14:48:04.124 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:03 vm03 bash[23394]: cluster 2026-03-10T14:48:02.702608+0000 mgr.y (mgr.14152) 100 : cluster [DBG] pgmap v67: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T14:48:06.124 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:05 vm03 bash[23394]: cluster 2026-03-10T14:48:04.702928+0000 mgr.y (mgr.14152) 101 : cluster [DBG] pgmap v68: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T14:48:06.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:05 vm00 bash[28403]: cluster 2026-03-10T14:48:04.702928+0000 mgr.y (mgr.14152) 101 : cluster [DBG] pgmap v68: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T14:48:06.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:05 vm00 bash[20726]: cluster 2026-03-10T14:48:04.702928+0000 mgr.y (mgr.14152) 101 : cluster [DBG] pgmap v68: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T14:48:06.465 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/mon.c/config
2026-03-10T14:48:08.025 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T14:48:08.043 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 -- ceph orch daemon add osd vm00:/dev/vdc
2026-03-10T14:48:08.220 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:07 vm00 bash[28403]: cluster 2026-03-10T14:48:06.703174+0000 mgr.y (mgr.14152) 102 : cluster [DBG] pgmap v69: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T14:48:08.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:07 vm00 bash[20726]: cluster 2026-03-10T14:48:06.703174+0000 mgr.y (mgr.14152) 102 : cluster [DBG] pgmap v69: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T14:48:08.374 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:07 vm03 bash[23394]: cluster 2026-03-10T14:48:06.703174+0000 mgr.y (mgr.14152) 102 : cluster [DBG] pgmap v69: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T14:48:10.124 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:09 vm03 bash[23394]: cluster 2026-03-10T14:48:08.703417+0000 mgr.y (mgr.14152) 103 : cluster [DBG] pgmap v70: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T14:48:10.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:09 vm03 bash[23394]: cephadm 2026-03-10T14:48:08.816668+0000 mgr.y (mgr.14152) 104 : cephadm [INF] Detected new or changed devices on vm00
2026-03-10T14:48:10.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:09 vm03 bash[23394]: audit 2026-03-10T14:48:08.824260+0000 mon.a (mon.0) 346 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:48:10.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:09 vm03 bash[23394]: audit 2026-03-10T14:48:08.831931+0000 mon.a (mon.0) 347 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:48:10.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:09 vm03 bash[23394]: audit 2026-03-10T14:48:08.833430+0000 mon.a (mon.0) 348 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch
2026-03-10T14:48:10.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:09 vm03 bash[23394]: audit 2026-03-10T14:48:08.834541+0000 mon.a (mon.0) 349 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:48:10.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:09 vm03 bash[23394]: audit 2026-03-10T14:48:08.835542+0000 mon.a (mon.0) 350 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T14:48:10.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:09 vm03 bash[23394]: audit 2026-03-10T14:48:08.841242+0000 mon.a (mon.0) 351 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:48:10.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:09 vm00 bash[28403]: cluster 2026-03-10T14:48:08.703417+0000 mgr.y (mgr.14152) 103 : cluster [DBG] pgmap v70: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T14:48:10.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:09 vm00 bash[28403]: cephadm 2026-03-10T14:48:08.816668+0000 mgr.y (mgr.14152) 104 : cephadm [INF] Detected new or changed devices on vm00
2026-03-10T14:48:10.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:09 vm00 bash[28403]: audit 2026-03-10T14:48:08.824260+0000 mon.a (mon.0) 346 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:48:10.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:09 vm00 bash[28403]: audit 2026-03-10T14:48:08.831931+0000 mon.a (mon.0) 347 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:48:10.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:09 vm00 bash[28403]: audit 2026-03-10T14:48:08.833430+0000 mon.a (mon.0) 348 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch
2026-03-10T14:48:10.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:09 vm00 bash[28403]: audit 2026-03-10T14:48:08.834541+0000 mon.a (mon.0) 349 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:48:10.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:09 vm00 bash[28403]: audit 2026-03-10T14:48:08.835542+0000 mon.a (mon.0) 350 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T14:48:10.221 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:09 vm00 bash[28403]: audit 2026-03-10T14:48:08.835542+0000 mon.a (mon.0) 350 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:48:10.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:09 vm00 bash[28403]: audit 2026-03-10T14:48:08.841242+0000 mon.a (mon.0) 351 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:48:10.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:09 vm00 bash[28403]: audit 2026-03-10T14:48:08.841242+0000 mon.a (mon.0) 351 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:48:10.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:09 vm00 bash[20726]: cluster 2026-03-10T14:48:08.703417+0000 mgr.y (mgr.14152) 103 : cluster [DBG] pgmap v70: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T14:48:10.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:09 vm00 bash[20726]: cluster 2026-03-10T14:48:08.703417+0000 mgr.y (mgr.14152) 103 : cluster [DBG] pgmap v70: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T14:48:10.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:09 vm00 bash[20726]: cephadm 2026-03-10T14:48:08.816668+0000 mgr.y (mgr.14152) 104 : cephadm [INF] Detected new or changed devices on vm00 2026-03-10T14:48:10.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:09 vm00 bash[20726]: cephadm 2026-03-10T14:48:08.816668+0000 mgr.y (mgr.14152) 104 : cephadm [INF] Detected new or changed devices on vm00 2026-03-10T14:48:10.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:09 vm00 bash[20726]: audit 2026-03-10T14:48:08.824260+0000 mon.a (mon.0) 346 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:48:10.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:09 vm00 bash[20726]: audit 2026-03-10T14:48:08.824260+0000 
mon.a (mon.0) 346 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:48:10.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:09 vm00 bash[20726]: audit 2026-03-10T14:48:08.831931+0000 mon.a (mon.0) 347 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:48:10.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:09 vm00 bash[20726]: audit 2026-03-10T14:48:08.831931+0000 mon.a (mon.0) 347 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:48:10.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:09 vm00 bash[20726]: audit 2026-03-10T14:48:08.833430+0000 mon.a (mon.0) 348 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-10T14:48:10.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:09 vm00 bash[20726]: audit 2026-03-10T14:48:08.833430+0000 mon.a (mon.0) 348 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-10T14:48:10.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:09 vm00 bash[20726]: audit 2026-03-10T14:48:08.834541+0000 mon.a (mon.0) 349 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:48:10.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:09 vm00 bash[20726]: audit 2026-03-10T14:48:08.834541+0000 mon.a (mon.0) 349 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:48:10.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:09 vm00 bash[20726]: audit 2026-03-10T14:48:08.835542+0000 mon.a (mon.0) 350 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 
cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:48:10.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:09 vm00 bash[20726]: audit 2026-03-10T14:48:08.835542+0000 mon.a (mon.0) 350 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:48:10.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:09 vm00 bash[20726]: audit 2026-03-10T14:48:08.841242+0000 mon.a (mon.0) 351 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:48:10.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:09 vm00 bash[20726]: audit 2026-03-10T14:48:08.841242+0000 mon.a (mon.0) 351 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:48:12.124 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:11 vm03 bash[23394]: cluster 2026-03-10T14:48:10.703999+0000 mgr.y (mgr.14152) 105 : cluster [DBG] pgmap v71: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T14:48:12.124 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:11 vm03 bash[23394]: cluster 2026-03-10T14:48:10.703999+0000 mgr.y (mgr.14152) 105 : cluster [DBG] pgmap v71: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T14:48:12.220 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:11 vm00 bash[28403]: cluster 2026-03-10T14:48:10.703999+0000 mgr.y (mgr.14152) 105 : cluster [DBG] pgmap v71: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T14:48:12.220 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:11 vm00 bash[28403]: cluster 2026-03-10T14:48:10.703999+0000 mgr.y (mgr.14152) 105 : cluster [DBG] pgmap v71: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T14:48:12.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:11 vm00 bash[20726]: cluster 2026-03-10T14:48:10.703999+0000 mgr.y (mgr.14152) 105 : cluster [DBG] pgmap v71: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB 
/ 40 GiB avail 2026-03-10T14:48:12.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:11 vm00 bash[20726]: cluster 2026-03-10T14:48:10.703999+0000 mgr.y (mgr.14152) 105 : cluster [DBG] pgmap v71: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T14:48:12.715 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/mon.c/config 2026-03-10T14:48:14.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:13 vm00 bash[28403]: cluster 2026-03-10T14:48:12.704281+0000 mgr.y (mgr.14152) 106 : cluster [DBG] pgmap v72: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T14:48:14.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:13 vm00 bash[28403]: cluster 2026-03-10T14:48:12.704281+0000 mgr.y (mgr.14152) 106 : cluster [DBG] pgmap v72: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T14:48:14.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:13 vm00 bash[28403]: audit 2026-03-10T14:48:12.985872+0000 mgr.y (mgr.14152) 107 : audit [DBG] from='client.24161 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:48:14.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:13 vm00 bash[28403]: audit 2026-03-10T14:48:12.985872+0000 mgr.y (mgr.14152) 107 : audit [DBG] from='client.24161 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:48:14.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:13 vm00 bash[28403]: audit 2026-03-10T14:48:12.987362+0000 mon.a (mon.0) 352 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T14:48:14.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:13 vm00 bash[28403]: audit 2026-03-10T14:48:12.987362+0000 mon.a (mon.0) 352 : audit [DBG] 
from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T14:48:14.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:13 vm00 bash[28403]: audit 2026-03-10T14:48:12.989034+0000 mon.a (mon.0) 353 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T14:48:14.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:13 vm00 bash[28403]: audit 2026-03-10T14:48:12.989034+0000 mon.a (mon.0) 353 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T14:48:14.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:13 vm00 bash[28403]: audit 2026-03-10T14:48:12.989511+0000 mon.a (mon.0) 354 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:48:14.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:13 vm00 bash[28403]: audit 2026-03-10T14:48:12.989511+0000 mon.a (mon.0) 354 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:48:14.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:13 vm00 bash[20726]: cluster 2026-03-10T14:48:12.704281+0000 mgr.y (mgr.14152) 106 : cluster [DBG] pgmap v72: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T14:48:14.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:13 vm00 bash[20726]: cluster 2026-03-10T14:48:12.704281+0000 mgr.y (mgr.14152) 106 : cluster [DBG] pgmap v72: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T14:48:14.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:13 vm00 bash[20726]: audit 2026-03-10T14:48:12.985872+0000 mgr.y (mgr.14152) 107 : audit [DBG] from='client.24161 -' 
entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:48:14.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:13 vm00 bash[20726]: audit 2026-03-10T14:48:12.985872+0000 mgr.y (mgr.14152) 107 : audit [DBG] from='client.24161 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:48:14.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:13 vm00 bash[20726]: audit 2026-03-10T14:48:12.987362+0000 mon.a (mon.0) 352 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T14:48:14.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:13 vm00 bash[20726]: audit 2026-03-10T14:48:12.987362+0000 mon.a (mon.0) 352 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T14:48:14.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:13 vm00 bash[20726]: audit 2026-03-10T14:48:12.989034+0000 mon.a (mon.0) 353 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T14:48:14.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:13 vm00 bash[20726]: audit 2026-03-10T14:48:12.989034+0000 mon.a (mon.0) 353 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T14:48:14.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:13 vm00 bash[20726]: audit 2026-03-10T14:48:12.989511+0000 mon.a (mon.0) 354 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:48:14.221 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:13 vm00 bash[20726]: audit 2026-03-10T14:48:12.989511+0000 mon.a (mon.0) 354 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:48:14.374 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:13 vm03 bash[23394]: cluster 2026-03-10T14:48:12.704281+0000 mgr.y (mgr.14152) 106 : cluster [DBG] pgmap v72: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T14:48:14.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:13 vm03 bash[23394]: cluster 2026-03-10T14:48:12.704281+0000 mgr.y (mgr.14152) 106 : cluster [DBG] pgmap v72: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T14:48:14.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:13 vm03 bash[23394]: audit 2026-03-10T14:48:12.985872+0000 mgr.y (mgr.14152) 107 : audit [DBG] from='client.24161 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:48:14.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:13 vm03 bash[23394]: audit 2026-03-10T14:48:12.985872+0000 mgr.y (mgr.14152) 107 : audit [DBG] from='client.24161 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:48:14.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:13 vm03 bash[23394]: audit 2026-03-10T14:48:12.987362+0000 mon.a (mon.0) 352 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T14:48:14.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:13 vm03 bash[23394]: audit 2026-03-10T14:48:12.987362+0000 mon.a (mon.0) 352 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: 
dispatch 2026-03-10T14:48:14.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:13 vm03 bash[23394]: audit 2026-03-10T14:48:12.989034+0000 mon.a (mon.0) 353 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T14:48:14.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:13 vm03 bash[23394]: audit 2026-03-10T14:48:12.989034+0000 mon.a (mon.0) 353 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T14:48:14.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:13 vm03 bash[23394]: audit 2026-03-10T14:48:12.989511+0000 mon.a (mon.0) 354 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:48:14.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:13 vm03 bash[23394]: audit 2026-03-10T14:48:12.989511+0000 mon.a (mon.0) 354 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:48:16.220 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:15 vm00 bash[28403]: cluster 2026-03-10T14:48:14.704535+0000 mgr.y (mgr.14152) 108 : cluster [DBG] pgmap v73: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T14:48:16.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:15 vm00 bash[28403]: cluster 2026-03-10T14:48:14.704535+0000 mgr.y (mgr.14152) 108 : cluster [DBG] pgmap v73: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T14:48:16.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:15 vm00 bash[20726]: cluster 2026-03-10T14:48:14.704535+0000 mgr.y (mgr.14152) 108 : cluster [DBG] pgmap v73: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T14:48:16.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:15 vm00 
bash[20726]: cluster 2026-03-10T14:48:14.704535+0000 mgr.y (mgr.14152) 108 : cluster [DBG] pgmap v73: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T14:48:16.374 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:15 vm03 bash[23394]: cluster 2026-03-10T14:48:14.704535+0000 mgr.y (mgr.14152) 108 : cluster [DBG] pgmap v73: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T14:48:16.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:15 vm03 bash[23394]: cluster 2026-03-10T14:48:14.704535+0000 mgr.y (mgr.14152) 108 : cluster [DBG] pgmap v73: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T14:48:18.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:17 vm00 bash[28403]: cluster 2026-03-10T14:48:16.704820+0000 mgr.y (mgr.14152) 109 : cluster [DBG] pgmap v74: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T14:48:18.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:17 vm00 bash[28403]: cluster 2026-03-10T14:48:16.704820+0000 mgr.y (mgr.14152) 109 : cluster [DBG] pgmap v74: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T14:48:18.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:17 vm00 bash[20726]: cluster 2026-03-10T14:48:16.704820+0000 mgr.y (mgr.14152) 109 : cluster [DBG] pgmap v74: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T14:48:18.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:17 vm00 bash[20726]: cluster 2026-03-10T14:48:16.704820+0000 mgr.y (mgr.14152) 109 : cluster [DBG] pgmap v74: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T14:48:18.374 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:17 vm03 bash[23394]: cluster 2026-03-10T14:48:16.704820+0000 mgr.y (mgr.14152) 109 : cluster [DBG] pgmap v74: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T14:48:18.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:17 vm03 bash[23394]: cluster 2026-03-10T14:48:16.704820+0000 mgr.y (mgr.14152) 109 : 
cluster [DBG] pgmap v74: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T14:48:19.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:18 vm00 bash[28403]: audit 2026-03-10T14:48:18.392648+0000 mon.a (mon.0) 355 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "2ef814fa-4e2d-4d38-94de-a33c6dc06fe1"}]: dispatch 2026-03-10T14:48:19.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:18 vm00 bash[28403]: audit 2026-03-10T14:48:18.392648+0000 mon.a (mon.0) 355 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "2ef814fa-4e2d-4d38-94de-a33c6dc06fe1"}]: dispatch 2026-03-10T14:48:19.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:18 vm00 bash[28403]: audit 2026-03-10T14:48:18.395158+0000 mon.b (mon.1) 7 : audit [INF] from='client.? 192.168.123.100:0/3932799528' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "2ef814fa-4e2d-4d38-94de-a33c6dc06fe1"}]: dispatch 2026-03-10T14:48:19.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:18 vm00 bash[28403]: audit 2026-03-10T14:48:18.395158+0000 mon.b (mon.1) 7 : audit [INF] from='client.? 192.168.123.100:0/3932799528' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "2ef814fa-4e2d-4d38-94de-a33c6dc06fe1"}]: dispatch 2026-03-10T14:48:19.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:18 vm00 bash[28403]: audit 2026-03-10T14:48:18.507927+0000 mon.a (mon.0) 356 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "2ef814fa-4e2d-4d38-94de-a33c6dc06fe1"}]': finished 2026-03-10T14:48:19.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:18 vm00 bash[28403]: audit 2026-03-10T14:48:18.507927+0000 mon.a (mon.0) 356 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "2ef814fa-4e2d-4d38-94de-a33c6dc06fe1"}]': finished 2026-03-10T14:48:19.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:18 vm00 bash[28403]: cluster 2026-03-10T14:48:18.515463+0000 mon.a (mon.0) 357 : cluster [DBG] osdmap e15: 3 total, 2 up, 3 in 2026-03-10T14:48:19.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:18 vm00 bash[28403]: cluster 2026-03-10T14:48:18.515463+0000 mon.a (mon.0) 357 : cluster [DBG] osdmap e15: 3 total, 2 up, 3 in 2026-03-10T14:48:19.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:18 vm00 bash[28403]: audit 2026-03-10T14:48:18.515724+0000 mon.a (mon.0) 358 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T14:48:19.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:18 vm00 bash[28403]: audit 2026-03-10T14:48:18.515724+0000 mon.a (mon.0) 358 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T14:48:19.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:18 vm00 bash[20726]: audit 2026-03-10T14:48:18.392648+0000 mon.a (mon.0) 355 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "2ef814fa-4e2d-4d38-94de-a33c6dc06fe1"}]: dispatch 2026-03-10T14:48:19.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:18 vm00 bash[20726]: audit 2026-03-10T14:48:18.392648+0000 mon.a (mon.0) 355 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "2ef814fa-4e2d-4d38-94de-a33c6dc06fe1"}]: dispatch 2026-03-10T14:48:19.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:18 vm00 bash[20726]: audit 2026-03-10T14:48:18.395158+0000 mon.b (mon.1) 7 : audit [INF] from='client.? 
192.168.123.100:0/3932799528' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "2ef814fa-4e2d-4d38-94de-a33c6dc06fe1"}]: dispatch 2026-03-10T14:48:19.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:18 vm00 bash[20726]: audit 2026-03-10T14:48:18.395158+0000 mon.b (mon.1) 7 : audit [INF] from='client.? 192.168.123.100:0/3932799528' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "2ef814fa-4e2d-4d38-94de-a33c6dc06fe1"}]: dispatch 2026-03-10T14:48:19.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:18 vm00 bash[20726]: audit 2026-03-10T14:48:18.507927+0000 mon.a (mon.0) 356 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "2ef814fa-4e2d-4d38-94de-a33c6dc06fe1"}]': finished 2026-03-10T14:48:19.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:18 vm00 bash[20726]: audit 2026-03-10T14:48:18.507927+0000 mon.a (mon.0) 356 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "2ef814fa-4e2d-4d38-94de-a33c6dc06fe1"}]': finished 2026-03-10T14:48:19.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:18 vm00 bash[20726]: cluster 2026-03-10T14:48:18.515463+0000 mon.a (mon.0) 357 : cluster [DBG] osdmap e15: 3 total, 2 up, 3 in 2026-03-10T14:48:19.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:18 vm00 bash[20726]: cluster 2026-03-10T14:48:18.515463+0000 mon.a (mon.0) 357 : cluster [DBG] osdmap e15: 3 total, 2 up, 3 in 2026-03-10T14:48:19.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:18 vm00 bash[20726]: audit 2026-03-10T14:48:18.515724+0000 mon.a (mon.0) 358 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T14:48:19.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:18 vm00 bash[20726]: audit 2026-03-10T14:48:18.515724+0000 mon.a (mon.0) 358 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 
cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T14:48:19.374 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:18 vm03 bash[23394]: audit 2026-03-10T14:48:18.392648+0000 mon.a (mon.0) 355 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "2ef814fa-4e2d-4d38-94de-a33c6dc06fe1"}]: dispatch 2026-03-10T14:48:19.433 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:18 vm03 bash[23394]: audit 2026-03-10T14:48:18.392648+0000 mon.a (mon.0) 355 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "2ef814fa-4e2d-4d38-94de-a33c6dc06fe1"}]: dispatch 2026-03-10T14:48:19.433 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:18 vm03 bash[23394]: audit 2026-03-10T14:48:18.395158+0000 mon.b (mon.1) 7 : audit [INF] from='client.? 192.168.123.100:0/3932799528' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "2ef814fa-4e2d-4d38-94de-a33c6dc06fe1"}]: dispatch 2026-03-10T14:48:19.433 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:18 vm03 bash[23394]: audit 2026-03-10T14:48:18.395158+0000 mon.b (mon.1) 7 : audit [INF] from='client.? 192.168.123.100:0/3932799528' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "2ef814fa-4e2d-4d38-94de-a33c6dc06fe1"}]: dispatch 2026-03-10T14:48:19.433 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:18 vm03 bash[23394]: audit 2026-03-10T14:48:18.507927+0000 mon.a (mon.0) 356 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "2ef814fa-4e2d-4d38-94de-a33c6dc06fe1"}]': finished 2026-03-10T14:48:19.433 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:18 vm03 bash[23394]: audit 2026-03-10T14:48:18.507927+0000 mon.a (mon.0) 356 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "2ef814fa-4e2d-4d38-94de-a33c6dc06fe1"}]': finished
2026-03-10T14:48:19.433 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:18 vm03 bash[23394]: cluster 2026-03-10T14:48:18.515463+0000 mon.a (mon.0) 357 : cluster [DBG] osdmap e15: 3 total, 2 up, 3 in
2026-03-10T14:48:19.433 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:18 vm03 bash[23394]: cluster 2026-03-10T14:48:18.515463+0000 mon.a (mon.0) 357 : cluster [DBG] osdmap e15: 3 total, 2 up, 3 in
2026-03-10T14:48:19.433 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:18 vm03 bash[23394]: audit 2026-03-10T14:48:18.515724+0000 mon.a (mon.0) 358 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T14:48:19.433 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:18 vm03 bash[23394]: audit 2026-03-10T14:48:18.515724+0000 mon.a (mon.0) 358 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T14:48:20.220 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:19 vm00 bash[28403]: cluster 2026-03-10T14:48:18.705065+0000 mgr.y (mgr.14152) 110 : cluster [DBG] pgmap v76: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T14:48:20.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:19 vm00 bash[28403]: cluster 2026-03-10T14:48:18.705065+0000 mgr.y (mgr.14152) 110 : cluster [DBG] pgmap v76: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T14:48:20.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:19 vm00 bash[28403]: audit 2026-03-10T14:48:19.149971+0000 mon.a (mon.0) 359 : audit [DBG] from='client.? 192.168.123.100:0/2921896580' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T14:48:20.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:19 vm00 bash[28403]: audit 2026-03-10T14:48:19.149971+0000 mon.a (mon.0) 359 : audit [DBG] from='client.? 192.168.123.100:0/2921896580' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T14:48:20.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:19 vm00 bash[20726]: cluster 2026-03-10T14:48:18.705065+0000 mgr.y (mgr.14152) 110 : cluster [DBG] pgmap v76: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T14:48:20.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:19 vm00 bash[20726]: cluster 2026-03-10T14:48:18.705065+0000 mgr.y (mgr.14152) 110 : cluster [DBG] pgmap v76: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T14:48:20.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:19 vm00 bash[20726]: audit 2026-03-10T14:48:19.149971+0000 mon.a (mon.0) 359 : audit [DBG] from='client.? 192.168.123.100:0/2921896580' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T14:48:20.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:19 vm00 bash[20726]: audit 2026-03-10T14:48:19.149971+0000 mon.a (mon.0) 359 : audit [DBG] from='client.? 192.168.123.100:0/2921896580' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T14:48:20.374 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:19 vm03 bash[23394]: cluster 2026-03-10T14:48:18.705065+0000 mgr.y (mgr.14152) 110 : cluster [DBG] pgmap v76: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T14:48:20.374 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:19 vm03 bash[23394]: cluster 2026-03-10T14:48:18.705065+0000 mgr.y (mgr.14152) 110 : cluster [DBG] pgmap v76: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T14:48:20.374 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:19 vm03 bash[23394]: audit 2026-03-10T14:48:19.149971+0000 mon.a (mon.0) 359 : audit [DBG] from='client.? 192.168.123.100:0/2921896580' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T14:48:20.374 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:19 vm03 bash[23394]: audit 2026-03-10T14:48:19.149971+0000 mon.a (mon.0) 359 : audit [DBG] from='client.? 192.168.123.100:0/2921896580' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T14:48:22.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:21 vm00 bash[28403]: cluster 2026-03-10T14:48:20.705354+0000 mgr.y (mgr.14152) 111 : cluster [DBG] pgmap v77: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T14:48:22.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:21 vm00 bash[28403]: cluster 2026-03-10T14:48:20.705354+0000 mgr.y (mgr.14152) 111 : cluster [DBG] pgmap v77: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T14:48:22.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:21 vm00 bash[20726]: cluster 2026-03-10T14:48:20.705354+0000 mgr.y (mgr.14152) 111 : cluster [DBG] pgmap v77: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T14:48:22.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:21 vm00 bash[20726]: cluster 2026-03-10T14:48:20.705354+0000 mgr.y (mgr.14152) 111 : cluster [DBG] pgmap v77: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T14:48:22.374 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:21 vm03 bash[23394]: cluster 2026-03-10T14:48:20.705354+0000 mgr.y (mgr.14152) 111 : cluster [DBG] pgmap v77: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T14:48:22.374 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:21 vm03 bash[23394]: cluster 2026-03-10T14:48:20.705354+0000 mgr.y (mgr.14152) 111 : cluster [DBG] pgmap v77: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T14:48:24.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:23 vm00 bash[20726]: cluster 2026-03-10T14:48:22.705651+0000 mgr.y (mgr.14152) 112 : cluster [DBG] pgmap v78: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T14:48:24.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:23 vm00 bash[20726]: cluster 2026-03-10T14:48:22.705651+0000 mgr.y (mgr.14152) 112 : cluster [DBG] pgmap v78: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T14:48:24.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:23 vm00 bash[28403]: cluster 2026-03-10T14:48:22.705651+0000 mgr.y (mgr.14152) 112 : cluster [DBG] pgmap v78: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T14:48:24.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:23 vm00 bash[28403]: cluster 2026-03-10T14:48:22.705651+0000 mgr.y (mgr.14152) 112 : cluster [DBG] pgmap v78: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T14:48:24.374 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:23 vm03 bash[23394]: cluster 2026-03-10T14:48:22.705651+0000 mgr.y (mgr.14152) 112 : cluster [DBG] pgmap v78: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T14:48:24.374 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:23 vm03 bash[23394]: cluster 2026-03-10T14:48:22.705651+0000 mgr.y (mgr.14152) 112 : cluster [DBG] pgmap v78: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T14:48:26.220 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:25 vm00 bash[28403]: cluster 2026-03-10T14:48:24.705985+0000 mgr.y (mgr.14152) 113 : cluster [DBG] pgmap v79: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T14:48:26.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:25 vm00 bash[28403]: cluster 2026-03-10T14:48:24.705985+0000 mgr.y (mgr.14152) 113 : cluster [DBG] pgmap v79: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T14:48:26.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:25 vm00 bash[20726]: cluster 2026-03-10T14:48:24.705985+0000 mgr.y (mgr.14152) 113 : cluster [DBG] pgmap v79: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T14:48:26.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:25 vm00 bash[20726]: cluster 2026-03-10T14:48:24.705985+0000 mgr.y (mgr.14152) 113 : cluster [DBG] pgmap v79: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T14:48:26.374 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:25 vm03 bash[23394]: cluster 2026-03-10T14:48:24.705985+0000 mgr.y (mgr.14152) 113 : cluster [DBG] pgmap v79: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T14:48:26.374 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:25 vm03 bash[23394]: cluster 2026-03-10T14:48:24.705985+0000 mgr.y (mgr.14152) 113 : cluster [DBG] pgmap v79: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T14:48:28.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:27 vm00 bash[28403]: cluster 2026-03-10T14:48:26.706263+0000 mgr.y (mgr.14152) 114 : cluster [DBG] pgmap v80: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T14:48:28.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:27 vm00 bash[28403]: cluster 2026-03-10T14:48:26.706263+0000 mgr.y (mgr.14152) 114 : cluster [DBG] pgmap v80: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T14:48:28.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:27 vm00 bash[20726]: cluster 2026-03-10T14:48:26.706263+0000 mgr.y (mgr.14152) 114 : cluster [DBG] pgmap v80: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T14:48:28.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:27 vm00 bash[20726]: cluster 2026-03-10T14:48:26.706263+0000 mgr.y (mgr.14152) 114 : cluster [DBG] pgmap v80: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T14:48:28.374 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:27 vm03 bash[23394]: cluster 2026-03-10T14:48:26.706263+0000 mgr.y (mgr.14152) 114 : cluster [DBG] pgmap v80: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T14:48:28.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:27 vm03 bash[23394]: cluster 2026-03-10T14:48:26.706263+0000 mgr.y (mgr.14152) 114 : cluster [DBG] pgmap v80: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T14:48:28.963 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:28 vm00 bash[20726]: audit 2026-03-10T14:48:28.161241+0000 mon.a (mon.0) 360 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
2026-03-10T14:48:28.963 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:28 vm00 bash[20726]: audit 2026-03-10T14:48:28.161241+0000 mon.a (mon.0) 360 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
2026-03-10T14:48:28.963 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:28 vm00 bash[20726]: audit 2026-03-10T14:48:28.162001+0000 mon.a (mon.0) 361 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:48:28.963 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:28 vm00 bash[20726]: audit 2026-03-10T14:48:28.162001+0000 mon.a (mon.0) 361 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:48:28.963 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:28 vm00 bash[20726]: cephadm 2026-03-10T14:48:28.162578+0000 mgr.y (mgr.14152) 115 : cephadm [INF] Deploying daemon osd.2 on vm00
2026-03-10T14:48:28.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:28 vm00 bash[20726]: cephadm 2026-03-10T14:48:28.162578+0000 mgr.y (mgr.14152) 115 : cephadm [INF] Deploying daemon osd.2 on vm00
2026-03-10T14:48:29.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:28 vm00 bash[28403]: audit 2026-03-10T14:48:28.161241+0000 mon.a (mon.0) 360 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
2026-03-10T14:48:29.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:28 vm00 bash[28403]: audit 2026-03-10T14:48:28.161241+0000 mon.a (mon.0) 360 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
2026-03-10T14:48:29.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:28 vm00 bash[28403]: audit 2026-03-10T14:48:28.162001+0000 mon.a (mon.0) 361 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:48:29.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:28 vm00 bash[28403]: audit 2026-03-10T14:48:28.162001+0000 mon.a (mon.0) 361 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:48:29.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:28 vm00 bash[28403]: cephadm 2026-03-10T14:48:28.162578+0000 mgr.y (mgr.14152) 115 : cephadm [INF] Deploying daemon osd.2 on vm00
2026-03-10T14:48:29.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:28 vm00 bash[28403]: cephadm 2026-03-10T14:48:28.162578+0000 mgr.y (mgr.14152) 115 : cephadm [INF] Deploying daemon osd.2 on vm00
2026-03-10T14:48:29.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:28 vm03 bash[23394]: audit 2026-03-10T14:48:28.161241+0000 mon.a (mon.0) 360 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
2026-03-10T14:48:29.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:28 vm03 bash[23394]: audit 2026-03-10T14:48:28.161241+0000 mon.a (mon.0) 360 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
2026-03-10T14:48:29.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:28 vm03 bash[23394]: audit 2026-03-10T14:48:28.162001+0000 mon.a (mon.0) 361 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:48:29.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:28 vm03 bash[23394]: audit 2026-03-10T14:48:28.162001+0000 mon.a (mon.0) 361 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:48:29.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:28 vm03 bash[23394]: cephadm 2026-03-10T14:48:28.162578+0000 mgr.y (mgr.14152) 115 : cephadm [INF] Deploying daemon osd.2 on vm00
2026-03-10T14:48:29.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:28 vm03 bash[23394]: cephadm 2026-03-10T14:48:28.162578+0000 mgr.y (mgr.14152) 115 : cephadm [INF] Deploying daemon osd.2 on vm00
2026-03-10T14:48:29.557 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:29 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:48:29.557 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:48:29 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:48:29.557 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:29 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:48:29.558 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 10 14:48:29 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:48:29.558 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 10 14:48:29 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:48:29.941 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:29 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:48:29.941 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:48:29 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:48:29.941 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:29 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:48:29.941 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 10 14:48:29 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:48:29.941 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 10 14:48:29 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:48:30.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:29 vm00 bash[20726]: cluster 2026-03-10T14:48:28.706526+0000 mgr.y (mgr.14152) 116 : cluster [DBG] pgmap v81: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T14:48:30.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:29 vm00 bash[20726]: cluster 2026-03-10T14:48:28.706526+0000 mgr.y (mgr.14152) 116 : cluster [DBG] pgmap v81: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T14:48:30.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:29 vm00 bash[20726]: audit 2026-03-10T14:48:29.680068+0000 mon.a (mon.0) 362 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:48:30.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:29 vm00 bash[20726]: audit 2026-03-10T14:48:29.680068+0000 mon.a (mon.0) 362 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:48:30.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:29 vm00 bash[20726]: audit 2026-03-10T14:48:29.693981+0000 mon.a (mon.0) 363 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:48:30.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:29 vm00 bash[20726]: audit 2026-03-10T14:48:29.693981+0000 mon.a (mon.0) 363 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:48:30.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:29 vm00 bash[20726]: audit 2026-03-10T14:48:29.705666+0000 mon.a (mon.0) 364 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:48:30.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:29 vm00 bash[20726]: audit 2026-03-10T14:48:29.705666+0000 mon.a (mon.0) 364 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:48:30.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:29 vm00 bash[28403]: cluster 2026-03-10T14:48:28.706526+0000 mgr.y (mgr.14152) 116 : cluster [DBG] pgmap v81: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T14:48:30.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:29 vm00 bash[28403]: cluster 2026-03-10T14:48:28.706526+0000 mgr.y (mgr.14152) 116 : cluster [DBG] pgmap v81: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T14:48:30.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:29 vm00 bash[28403]: audit 2026-03-10T14:48:29.680068+0000 mon.a (mon.0) 362 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:48:30.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:29 vm00 bash[28403]: audit 2026-03-10T14:48:29.680068+0000 mon.a (mon.0) 362 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:48:30.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:29 vm00 bash[28403]: audit 2026-03-10T14:48:29.693981+0000 mon.a (mon.0) 363 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:48:30.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:29 vm00 bash[28403]: audit 2026-03-10T14:48:29.693981+0000 mon.a (mon.0) 363 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:48:30.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:29 vm00 bash[28403]: audit 2026-03-10T14:48:29.705666+0000 mon.a (mon.0) 364 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:48:30.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:29 vm00 bash[28403]: audit 2026-03-10T14:48:29.705666+0000 mon.a (mon.0) 364 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:48:30.374 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:29 vm03 bash[23394]: cluster 2026-03-10T14:48:28.706526+0000 mgr.y (mgr.14152) 116 : cluster [DBG] pgmap v81: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T14:48:30.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:29 vm03 bash[23394]: cluster 2026-03-10T14:48:28.706526+0000 mgr.y (mgr.14152) 116 : cluster [DBG] pgmap v81: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T14:48:30.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:29 vm03 bash[23394]: audit 2026-03-10T14:48:29.680068+0000 mon.a (mon.0) 362 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:48:30.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:29 vm03 bash[23394]: audit 2026-03-10T14:48:29.680068+0000 mon.a (mon.0) 362 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:48:30.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:29 vm03 bash[23394]: audit 2026-03-10T14:48:29.693981+0000 mon.a (mon.0) 363 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:48:30.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:29 vm03 bash[23394]: audit 2026-03-10T14:48:29.693981+0000 mon.a (mon.0) 363 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:48:30.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:29 vm03 bash[23394]: audit 2026-03-10T14:48:29.705666+0000 mon.a (mon.0) 364 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:48:30.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:29 vm03 bash[23394]: audit 2026-03-10T14:48:29.705666+0000 mon.a (mon.0) 364 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:48:32.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:31 vm03 bash[23394]: cluster 2026-03-10T14:48:30.706812+0000 mgr.y (mgr.14152) 117 : cluster [DBG] pgmap v82: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T14:48:32.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:31 vm03 bash[23394]: cluster 2026-03-10T14:48:30.706812+0000 mgr.y (mgr.14152) 117 : cluster [DBG] pgmap v82: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T14:48:32.450 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:31 vm00 bash[20726]: cluster 2026-03-10T14:48:30.706812+0000 mgr.y (mgr.14152) 117 : cluster [DBG] pgmap v82: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T14:48:32.450 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:31 vm00 bash[20726]: cluster 2026-03-10T14:48:30.706812+0000 mgr.y (mgr.14152) 117 : cluster [DBG] pgmap v82: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T14:48:32.450 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:31 vm00 bash[28403]: cluster 2026-03-10T14:48:30.706812+0000 mgr.y (mgr.14152) 117 : cluster [DBG] pgmap v82: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T14:48:32.450 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:31 vm00 bash[28403]: cluster 2026-03-10T14:48:30.706812+0000 mgr.y (mgr.14152) 117 : cluster [DBG] pgmap v82: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T14:48:34.374 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:34 vm03 bash[23394]: cluster 2026-03-10T14:48:32.707070+0000 mgr.y (mgr.14152) 118 : cluster [DBG] pgmap v83: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T14:48:34.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:34 vm03 bash[23394]: cluster 2026-03-10T14:48:32.707070+0000 mgr.y (mgr.14152) 118 : cluster [DBG] pgmap v83: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T14:48:34.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:34 vm03 bash[23394]: audit 2026-03-10T14:48:33.416317+0000 mon.a (mon.0) 365 : audit [INF] from='osd.2 v2:192.168.123.100:6809/4087124508' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
2026-03-10T14:48:34.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:34 vm03 bash[23394]: audit 2026-03-10T14:48:33.416317+0000 mon.a (mon.0) 365 : audit [INF] from='osd.2 v2:192.168.123.100:6809/4087124508' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
2026-03-10T14:48:34.470 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:34 vm00 bash[20726]: cluster 2026-03-10T14:48:32.707070+0000 mgr.y (mgr.14152) 118 : cluster [DBG] pgmap v83: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T14:48:34.556 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:34 vm00 bash[20726]: cluster 2026-03-10T14:48:32.707070+0000 mgr.y (mgr.14152) 118 : cluster [DBG] pgmap v83: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T14:48:34.556 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:34 vm00 bash[20726]: audit 2026-03-10T14:48:33.416317+0000 mon.a (mon.0) 365 : audit [INF] from='osd.2 v2:192.168.123.100:6809/4087124508' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
2026-03-10T14:48:34.556 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:34 vm00 bash[20726]: audit 2026-03-10T14:48:33.416317+0000 mon.a (mon.0) 365 : audit [INF] from='osd.2 v2:192.168.123.100:6809/4087124508' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
2026-03-10T14:48:34.556 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:34 vm00 bash[28403]: cluster 2026-03-10T14:48:32.707070+0000 mgr.y (mgr.14152) 118 : cluster [DBG] pgmap v83: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T14:48:34.556 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:34 vm00 bash[28403]: cluster 2026-03-10T14:48:32.707070+0000 mgr.y (mgr.14152) 118 : cluster [DBG] pgmap v83: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T14:48:34.556 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:34 vm00 bash[28403]: audit 2026-03-10T14:48:33.416317+0000 mon.a (mon.0) 365 : audit [INF] from='osd.2 v2:192.168.123.100:6809/4087124508' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
2026-03-10T14:48:34.556 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:34 vm00 bash[28403]: audit 2026-03-10T14:48:33.416317+0000 mon.a (mon.0) 365 : audit [INF] from='osd.2 v2:192.168.123.100:6809/4087124508' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
2026-03-10T14:48:35.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:35 vm03 bash[23394]: audit 2026-03-10T14:48:33.996426+0000 mon.a (mon.0) 366 : audit [INF] from='osd.2 v2:192.168.123.100:6809/4087124508' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
2026-03-10T14:48:35.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:35 vm03 bash[23394]: audit 2026-03-10T14:48:33.996426+0000 mon.a (mon.0) 366 : audit [INF] from='osd.2 v2:192.168.123.100:6809/4087124508' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
2026-03-10T14:48:35.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:35 vm03 bash[23394]: cluster 2026-03-10T14:48:33.999530+0000 mon.a (mon.0) 367 : cluster [DBG] osdmap e16: 3 total, 2 up, 3 in
2026-03-10T14:48:35.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:35 vm03 bash[23394]: cluster 2026-03-10T14:48:33.999530+0000 mon.a (mon.0) 367 : cluster [DBG] osdmap e16: 3 total, 2 up, 3 in
2026-03-10T14:48:35.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:35 vm03 bash[23394]: audit 2026-03-10T14:48:34.000192+0000 mon.a (mon.0) 368 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T14:48:35.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:35 vm03 bash[23394]: audit 2026-03-10T14:48:34.000192+0000 mon.a (mon.0) 368 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T14:48:35.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:35 vm03 bash[23394]: audit 2026-03-10T14:48:34.000904+0000 mon.a (mon.0) 369 : audit [INF] from='osd.2 v2:192.168.123.100:6809/4087124508' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch
2026-03-10T14:48:35.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:35 vm03 bash[23394]: audit 2026-03-10T14:48:34.000904+0000 mon.a (mon.0) 369 : audit [INF] from='osd.2 v2:192.168.123.100:6809/4087124508' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch
2026-03-10T14:48:35.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:35 vm03 bash[23394]: audit 2026-03-10T14:48:34.999575+0000 mon.a (mon.0) 370 : audit [INF] from='osd.2 v2:192.168.123.100:6809/4087124508' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished
2026-03-10T14:48:35.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:35 vm03 bash[23394]: audit 2026-03-10T14:48:34.999575+0000 mon.a (mon.0) 370 : audit [INF] from='osd.2 v2:192.168.123.100:6809/4087124508' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished
2026-03-10T14:48:35.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:35 vm00 bash[20726]: audit 2026-03-10T14:48:33.996426+0000 mon.a (mon.0) 366 : audit [INF] from='osd.2 v2:192.168.123.100:6809/4087124508' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
2026-03-10T14:48:35.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:35 vm00 bash[20726]: audit 2026-03-10T14:48:33.996426+0000 mon.a (mon.0) 366 : audit [INF] from='osd.2 v2:192.168.123.100:6809/4087124508' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
2026-03-10T14:48:35.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:35 vm00 bash[20726]: cluster 2026-03-10T14:48:33.999530+0000 mon.a (mon.0) 367 : cluster [DBG] osdmap e16: 3 total, 2 up, 3 in
2026-03-10T14:48:35.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:35 vm00 bash[20726]: cluster 2026-03-10T14:48:33.999530+0000 mon.a (mon.0) 367 : cluster [DBG] osdmap e16: 3 total, 2 up, 3 in
2026-03-10T14:48:35.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:35 vm00 bash[20726]: audit 2026-03-10T14:48:34.000192+0000 mon.a (mon.0) 368 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T14:48:35.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:35 vm00 bash[20726]: audit 2026-03-10T14:48:34.000192+0000 mon.a (mon.0) 368 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T14:48:35.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:35 vm00 bash[20726]: audit 2026-03-10T14:48:34.000904+0000 mon.a (mon.0) 369 : audit [INF] from='osd.2 v2:192.168.123.100:6809/4087124508' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch
2026-03-10T14:48:35.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:35 vm00 bash[20726]: audit 2026-03-10T14:48:34.000904+0000 mon.a (mon.0) 369 : audit [INF] from='osd.2 v2:192.168.123.100:6809/4087124508' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch
2026-03-10T14:48:35.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:35 vm00 bash[20726]: audit 2026-03-10T14:48:34.999575+0000 mon.a (mon.0) 370 : audit [INF] from='osd.2 v2:192.168.123.100:6809/4087124508' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished
2026-03-10T14:48:35.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:35 vm00 bash[20726]: audit 2026-03-10T14:48:34.999575+0000 mon.a (mon.0) 370 : audit [INF] from='osd.2 v2:192.168.123.100:6809/4087124508' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished
2026-03-10T14:48:35.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:35 vm00 bash[28403]: audit 2026-03-10T14:48:33.996426+0000 mon.a (mon.0) 366 : audit [INF] from='osd.2 v2:192.168.123.100:6809/4087124508' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
2026-03-10T14:48:35.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:35 vm00 bash[28403]: audit 2026-03-10T14:48:33.996426+0000 mon.a (mon.0) 366 : audit [INF] from='osd.2 v2:192.168.123.100:6809/4087124508' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
2026-03-10T14:48:35.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:35 vm00 bash[28403]: cluster 2026-03-10T14:48:33.999530+0000 mon.a (mon.0) 367 : cluster [DBG] osdmap e16: 3 total, 2 up, 3 in
2026-03-10T14:48:35.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:35 vm00 bash[28403]: cluster 2026-03-10T14:48:33.999530+0000 mon.a (mon.0) 367 : cluster [DBG] osdmap e16: 3 total, 2 up, 3 in
2026-03-10T14:48:35.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:35 vm00 bash[28403]: audit 2026-03-10T14:48:34.000192+0000 mon.a (mon.0) 368 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T14:48:35.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:35 vm00 bash[28403]: audit 2026-03-10T14:48:34.000192+0000 mon.a (mon.0) 368 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T14:48:35.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:35 vm00 bash[28403]: audit 2026-03-10T14:48:34.000904+0000 mon.a (mon.0) 369 : audit [INF] from='osd.2 v2:192.168.123.100:6809/4087124508' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch
2026-03-10T14:48:35.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:35 vm00 bash[28403]: audit 2026-03-10T14:48:34.000904+0000 mon.a (mon.0) 369 : audit [INF] from='osd.2 v2:192.168.123.100:6809/4087124508' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch
2026-03-10T14:48:35.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:35 vm00 bash[28403]: audit 2026-03-10T14:48:34.999575+0000 mon.a (mon.0) 370 : audit [INF] from='osd.2 v2:192.168.123.100:6809/4087124508' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished
2026-03-10T14:48:35.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:35 vm00 bash[28403]: audit 2026-03-10T14:48:34.999575+0000 mon.a (mon.0) 370 : audit [INF] from='osd.2 v2:192.168.123.100:6809/4087124508' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished
2026-03-10T14:48:36.070 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:36 vm00 bash[20726]: cluster 2026-03-10T14:48:34.707411+0000 mgr.y (mgr.14152) 119 : cluster [DBG] pgmap v85: 0
pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T14:48:36.070 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:36 vm00 bash[20726]: cluster 2026-03-10T14:48:34.707411+0000 mgr.y (mgr.14152) 119 : cluster [DBG] pgmap v85: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T14:48:36.070 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:36 vm00 bash[20726]: cluster 2026-03-10T14:48:35.004877+0000 mon.a (mon.0) 371 : cluster [DBG] osdmap e17: 3 total, 2 up, 3 in 2026-03-10T14:48:36.070 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:36 vm00 bash[20726]: cluster 2026-03-10T14:48:35.004877+0000 mon.a (mon.0) 371 : cluster [DBG] osdmap e17: 3 total, 2 up, 3 in 2026-03-10T14:48:36.070 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:36 vm00 bash[20726]: audit 2026-03-10T14:48:35.016663+0000 mon.a (mon.0) 372 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T14:48:36.071 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:36 vm00 bash[20726]: audit 2026-03-10T14:48:35.016663+0000 mon.a (mon.0) 372 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T14:48:36.071 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:36 vm00 bash[20726]: audit 2026-03-10T14:48:35.019233+0000 mon.a (mon.0) 373 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T14:48:36.071 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:36 vm00 bash[20726]: audit 2026-03-10T14:48:35.019233+0000 mon.a (mon.0) 373 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T14:48:36.071 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:36 vm00 bash[20726]: audit 2026-03-10T14:48:36.019929+0000 mon.a (mon.0) 374 : audit [DBG] 
from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T14:48:36.071 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:36 vm00 bash[20726]: audit 2026-03-10T14:48:36.019929+0000 mon.a (mon.0) 374 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T14:48:36.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:36 vm03 bash[23394]: cluster 2026-03-10T14:48:34.707411+0000 mgr.y (mgr.14152) 119 : cluster [DBG] pgmap v85: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T14:48:36.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:36 vm03 bash[23394]: cluster 2026-03-10T14:48:34.707411+0000 mgr.y (mgr.14152) 119 : cluster [DBG] pgmap v85: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T14:48:36.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:36 vm03 bash[23394]: cluster 2026-03-10T14:48:35.004877+0000 mon.a (mon.0) 371 : cluster [DBG] osdmap e17: 3 total, 2 up, 3 in 2026-03-10T14:48:36.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:36 vm03 bash[23394]: cluster 2026-03-10T14:48:35.004877+0000 mon.a (mon.0) 371 : cluster [DBG] osdmap e17: 3 total, 2 up, 3 in 2026-03-10T14:48:36.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:36 vm03 bash[23394]: audit 2026-03-10T14:48:35.016663+0000 mon.a (mon.0) 372 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T14:48:36.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:36 vm03 bash[23394]: audit 2026-03-10T14:48:35.016663+0000 mon.a (mon.0) 372 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T14:48:36.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:36 vm03 bash[23394]: audit 2026-03-10T14:48:35.019233+0000 mon.a (mon.0) 373 : audit 
[DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T14:48:36.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:36 vm03 bash[23394]: audit 2026-03-10T14:48:35.019233+0000 mon.a (mon.0) 373 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T14:48:36.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:36 vm03 bash[23394]: audit 2026-03-10T14:48:36.019929+0000 mon.a (mon.0) 374 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T14:48:36.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:36 vm03 bash[23394]: audit 2026-03-10T14:48:36.019929+0000 mon.a (mon.0) 374 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T14:48:36.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:36 vm00 bash[28403]: cluster 2026-03-10T14:48:34.707411+0000 mgr.y (mgr.14152) 119 : cluster [DBG] pgmap v85: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T14:48:36.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:36 vm00 bash[28403]: cluster 2026-03-10T14:48:34.707411+0000 mgr.y (mgr.14152) 119 : cluster [DBG] pgmap v85: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T14:48:36.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:36 vm00 bash[28403]: cluster 2026-03-10T14:48:35.004877+0000 mon.a (mon.0) 371 : cluster [DBG] osdmap e17: 3 total, 2 up, 3 in 2026-03-10T14:48:36.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:36 vm00 bash[28403]: cluster 2026-03-10T14:48:35.004877+0000 mon.a (mon.0) 371 : cluster [DBG] osdmap e17: 3 total, 2 up, 3 in 2026-03-10T14:48:36.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:36 vm00 bash[28403]: audit 2026-03-10T14:48:35.016663+0000 mon.a (mon.0) 372 : 
audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T14:48:36.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:36 vm00 bash[28403]: audit 2026-03-10T14:48:35.016663+0000 mon.a (mon.0) 372 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T14:48:36.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:36 vm00 bash[28403]: audit 2026-03-10T14:48:35.019233+0000 mon.a (mon.0) 373 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T14:48:36.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:36 vm00 bash[28403]: audit 2026-03-10T14:48:35.019233+0000 mon.a (mon.0) 373 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T14:48:36.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:36 vm00 bash[28403]: audit 2026-03-10T14:48:36.019929+0000 mon.a (mon.0) 374 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T14:48:36.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:36 vm00 bash[28403]: audit 2026-03-10T14:48:36.019929+0000 mon.a (mon.0) 374 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T14:48:37.307 INFO:teuthology.orchestra.run.vm00.stdout:Created osd(s) 2 on host 'vm00' 2026-03-10T14:48:37.356 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:37 vm00 bash[20726]: cluster 2026-03-10T14:48:34.391543+0000 osd.2 (osd.2) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T14:48:37.356 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:37 vm00 bash[20726]: cluster 2026-03-10T14:48:34.391543+0000 osd.2 (osd.2) 1 : cluster [DBG] purged_snaps 
scrub starts 2026-03-10T14:48:37.356 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:37 vm00 bash[20726]: cluster 2026-03-10T14:48:34.391608+0000 osd.2 (osd.2) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T14:48:37.356 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:37 vm00 bash[20726]: cluster 2026-03-10T14:48:34.391608+0000 osd.2 (osd.2) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T14:48:37.356 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:37 vm00 bash[20726]: audit 2026-03-10T14:48:36.084530+0000 mon.a (mon.0) 375 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:48:37.356 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:37 vm00 bash[20726]: audit 2026-03-10T14:48:36.084530+0000 mon.a (mon.0) 375 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:48:37.356 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:37 vm00 bash[20726]: audit 2026-03-10T14:48:36.123461+0000 mon.a (mon.0) 376 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:48:37.356 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:37 vm00 bash[20726]: audit 2026-03-10T14:48:36.123461+0000 mon.a (mon.0) 376 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:48:37.356 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:37 vm00 bash[20726]: audit 2026-03-10T14:48:36.133715+0000 mon.a (mon.0) 377 : audit [INF] from='osd.2 v2:192.168.123.100:6809/4087124508' entity='osd.2' 2026-03-10T14:48:37.356 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:37 vm00 bash[20726]: audit 2026-03-10T14:48:36.133715+0000 mon.a (mon.0) 377 : audit [INF] from='osd.2 v2:192.168.123.100:6809/4087124508' entity='osd.2' 2026-03-10T14:48:37.356 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:37 vm00 bash[20726]: audit 2026-03-10T14:48:36.531087+0000 mon.a (mon.0) 378 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 
cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:48:37.356 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:37 vm00 bash[20726]: audit 2026-03-10T14:48:36.531087+0000 mon.a (mon.0) 378 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:48:37.356 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:37 vm00 bash[20726]: audit 2026-03-10T14:48:36.531655+0000 mon.a (mon.0) 379 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:48:37.356 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:37 vm00 bash[20726]: audit 2026-03-10T14:48:36.531655+0000 mon.a (mon.0) 379 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:48:37.356 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:37 vm00 bash[20726]: audit 2026-03-10T14:48:36.536759+0000 mon.a (mon.0) 380 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:48:37.356 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:37 vm00 bash[20726]: audit 2026-03-10T14:48:36.536759+0000 mon.a (mon.0) 380 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:48:37.356 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:37 vm00 bash[20726]: audit 2026-03-10T14:48:37.019537+0000 mon.a (mon.0) 381 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T14:48:37.356 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:37 vm00 bash[20726]: audit 2026-03-10T14:48:37.019537+0000 mon.a (mon.0) 381 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T14:48:37.357 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:37 vm00 bash[28403]: cluster 2026-03-10T14:48:34.391543+0000 osd.2 (osd.2) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T14:48:37.357 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:37 vm00 bash[28403]: cluster 2026-03-10T14:48:34.391543+0000 osd.2 (osd.2) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T14:48:37.357 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:37 vm00 bash[28403]: cluster 2026-03-10T14:48:34.391608+0000 osd.2 (osd.2) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T14:48:37.357 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:37 vm00 bash[28403]: cluster 2026-03-10T14:48:34.391608+0000 osd.2 (osd.2) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T14:48:37.357 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:37 vm00 bash[28403]: audit 2026-03-10T14:48:36.084530+0000 mon.a (mon.0) 375 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:48:37.357 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:37 vm00 bash[28403]: audit 2026-03-10T14:48:36.084530+0000 mon.a (mon.0) 375 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:48:37.357 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:37 vm00 bash[28403]: audit 2026-03-10T14:48:36.123461+0000 mon.a (mon.0) 376 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:48:37.357 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:37 vm00 bash[28403]: audit 2026-03-10T14:48:36.123461+0000 mon.a (mon.0) 376 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:48:37.357 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:37 vm00 bash[28403]: audit 2026-03-10T14:48:36.133715+0000 mon.a (mon.0) 377 : audit [INF] from='osd.2 v2:192.168.123.100:6809/4087124508' entity='osd.2' 2026-03-10T14:48:37.357 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:37 vm00 bash[28403]: audit 
2026-03-10T14:48:36.133715+0000 mon.a (mon.0) 377 : audit [INF] from='osd.2 v2:192.168.123.100:6809/4087124508' entity='osd.2' 2026-03-10T14:48:37.357 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:37 vm00 bash[28403]: audit 2026-03-10T14:48:36.531087+0000 mon.a (mon.0) 378 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:48:37.357 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:37 vm00 bash[28403]: audit 2026-03-10T14:48:36.531087+0000 mon.a (mon.0) 378 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:48:37.357 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:37 vm00 bash[28403]: audit 2026-03-10T14:48:36.531655+0000 mon.a (mon.0) 379 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:48:37.357 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:37 vm00 bash[28403]: audit 2026-03-10T14:48:36.531655+0000 mon.a (mon.0) 379 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:48:37.357 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:37 vm00 bash[28403]: audit 2026-03-10T14:48:36.536759+0000 mon.a (mon.0) 380 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:48:37.357 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:37 vm00 bash[28403]: audit 2026-03-10T14:48:36.536759+0000 mon.a (mon.0) 380 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:48:37.357 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:37 vm00 bash[28403]: audit 2026-03-10T14:48:37.019537+0000 mon.a (mon.0) 381 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd 
metadata", "id": 2}]: dispatch 2026-03-10T14:48:37.357 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:37 vm00 bash[28403]: audit 2026-03-10T14:48:37.019537+0000 mon.a (mon.0) 381 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T14:48:37.374 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:37 vm03 bash[23394]: cluster 2026-03-10T14:48:34.391543+0000 osd.2 (osd.2) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T14:48:37.374 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:37 vm03 bash[23394]: cluster 2026-03-10T14:48:34.391543+0000 osd.2 (osd.2) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T14:48:37.374 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:37 vm03 bash[23394]: cluster 2026-03-10T14:48:34.391608+0000 osd.2 (osd.2) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T14:48:37.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:37 vm03 bash[23394]: cluster 2026-03-10T14:48:34.391608+0000 osd.2 (osd.2) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T14:48:37.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:37 vm03 bash[23394]: audit 2026-03-10T14:48:36.084530+0000 mon.a (mon.0) 375 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:48:37.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:37 vm03 bash[23394]: audit 2026-03-10T14:48:36.084530+0000 mon.a (mon.0) 375 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:48:37.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:37 vm03 bash[23394]: audit 2026-03-10T14:48:36.123461+0000 mon.a (mon.0) 376 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:48:37.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:37 vm03 bash[23394]: audit 2026-03-10T14:48:36.123461+0000 mon.a (mon.0) 376 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 
2026-03-10T14:48:37.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:37 vm03 bash[23394]: audit 2026-03-10T14:48:36.133715+0000 mon.a (mon.0) 377 : audit [INF] from='osd.2 v2:192.168.123.100:6809/4087124508' entity='osd.2' 2026-03-10T14:48:37.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:37 vm03 bash[23394]: audit 2026-03-10T14:48:36.133715+0000 mon.a (mon.0) 377 : audit [INF] from='osd.2 v2:192.168.123.100:6809/4087124508' entity='osd.2' 2026-03-10T14:48:37.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:37 vm03 bash[23394]: audit 2026-03-10T14:48:36.531087+0000 mon.a (mon.0) 378 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:48:37.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:37 vm03 bash[23394]: audit 2026-03-10T14:48:36.531087+0000 mon.a (mon.0) 378 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:48:37.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:37 vm03 bash[23394]: audit 2026-03-10T14:48:36.531655+0000 mon.a (mon.0) 379 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:48:37.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:37 vm03 bash[23394]: audit 2026-03-10T14:48:36.531655+0000 mon.a (mon.0) 379 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:48:37.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:37 vm03 bash[23394]: audit 2026-03-10T14:48:36.536759+0000 mon.a (mon.0) 380 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:48:37.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:37 vm03 bash[23394]: audit 2026-03-10T14:48:36.536759+0000 mon.a (mon.0) 
380 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:48:37.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:37 vm03 bash[23394]: audit 2026-03-10T14:48:37.019537+0000 mon.a (mon.0) 381 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T14:48:37.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:37 vm03 bash[23394]: audit 2026-03-10T14:48:37.019537+0000 mon.a (mon.0) 381 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T14:48:37.404 DEBUG:teuthology.orchestra.run.vm00:osd.2> sudo journalctl -f -n 0 -u ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@osd.2.service 2026-03-10T14:48:37.405 INFO:tasks.cephadm:Deploying osd.3 on vm00 with /dev/vdb... 2026-03-10T14:48:37.405 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 -- lvm zap /dev/vdb 2026-03-10T14:48:38.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:38 vm00 bash[28403]: cluster 2026-03-10T14:48:36.707681+0000 mgr.y (mgr.14152) 120 : cluster [DBG] pgmap v87: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T14:48:38.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:38 vm00 bash[28403]: cluster 2026-03-10T14:48:36.707681+0000 mgr.y (mgr.14152) 120 : cluster [DBG] pgmap v87: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T14:48:38.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:38 vm00 bash[28403]: cluster 2026-03-10T14:48:37.138876+0000 mon.a (mon.0) 382 : cluster [INF] osd.2 v2:192.168.123.100:6809/4087124508 boot 2026-03-10T14:48:38.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:38 vm00 bash[28403]: cluster 
2026-03-10T14:48:37.138876+0000 mon.a (mon.0) 382 : cluster [INF] osd.2 v2:192.168.123.100:6809/4087124508 boot 2026-03-10T14:48:38.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:38 vm00 bash[28403]: cluster 2026-03-10T14:48:37.140120+0000 mon.a (mon.0) 383 : cluster [DBG] osdmap e18: 3 total, 3 up, 3 in 2026-03-10T14:48:38.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:38 vm00 bash[28403]: cluster 2026-03-10T14:48:37.140120+0000 mon.a (mon.0) 383 : cluster [DBG] osdmap e18: 3 total, 3 up, 3 in 2026-03-10T14:48:38.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:38 vm00 bash[28403]: audit 2026-03-10T14:48:37.140405+0000 mon.a (mon.0) 384 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T14:48:38.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:38 vm00 bash[28403]: audit 2026-03-10T14:48:37.140405+0000 mon.a (mon.0) 384 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T14:48:38.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:38 vm00 bash[28403]: audit 2026-03-10T14:48:37.289310+0000 mon.a (mon.0) 385 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:48:38.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:38 vm00 bash[28403]: audit 2026-03-10T14:48:37.289310+0000 mon.a (mon.0) 385 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:48:38.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:38 vm00 bash[28403]: audit 2026-03-10T14:48:37.294465+0000 mon.a (mon.0) 386 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:48:38.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:38 vm00 bash[28403]: audit 
2026-03-10T14:48:37.294465+0000 mon.a (mon.0) 386 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:48:38.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:38 vm00 bash[28403]: audit 2026-03-10T14:48:37.299601+0000 mon.a (mon.0) 387 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:48:38.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:38 vm00 bash[28403]: audit 2026-03-10T14:48:37.299601+0000 mon.a (mon.0) 387 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:48:38.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:38 vm00 bash[20726]: cluster 2026-03-10T14:48:36.707681+0000 mgr.y (mgr.14152) 120 : cluster [DBG] pgmap v87: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T14:48:38.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:38 vm00 bash[20726]: cluster 2026-03-10T14:48:36.707681+0000 mgr.y (mgr.14152) 120 : cluster [DBG] pgmap v87: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T14:48:38.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:38 vm00 bash[20726]: cluster 2026-03-10T14:48:37.138876+0000 mon.a (mon.0) 382 : cluster [INF] osd.2 v2:192.168.123.100:6809/4087124508 boot 2026-03-10T14:48:38.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:38 vm00 bash[20726]: cluster 2026-03-10T14:48:37.138876+0000 mon.a (mon.0) 382 : cluster [INF] osd.2 v2:192.168.123.100:6809/4087124508 boot 2026-03-10T14:48:38.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:38 vm00 bash[20726]: cluster 2026-03-10T14:48:37.140120+0000 mon.a (mon.0) 383 : cluster [DBG] osdmap e18: 3 total, 3 up, 3 in 2026-03-10T14:48:38.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:38 vm00 bash[20726]: cluster 2026-03-10T14:48:37.140120+0000 mon.a (mon.0) 383 : cluster [DBG] osdmap e18: 3 total, 3 up, 3 in 2026-03-10T14:48:38.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:38 vm00 bash[20726]: 
audit 2026-03-10T14:48:37.140405+0000 mon.a (mon.0) 384 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T14:48:38.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:38 vm00 bash[20726]: audit 2026-03-10T14:48:37.140405+0000 mon.a (mon.0) 384 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T14:48:38.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:38 vm00 bash[20726]: audit 2026-03-10T14:48:37.289310+0000 mon.a (mon.0) 385 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:48:38.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:38 vm00 bash[20726]: audit 2026-03-10T14:48:37.289310+0000 mon.a (mon.0) 385 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:48:38.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:38 vm00 bash[20726]: audit 2026-03-10T14:48:37.294465+0000 mon.a (mon.0) 386 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:48:38.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:38 vm00 bash[20726]: audit 2026-03-10T14:48:37.294465+0000 mon.a (mon.0) 386 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:48:38.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:38 vm00 bash[20726]: audit 2026-03-10T14:48:37.299601+0000 mon.a (mon.0) 387 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:48:38.471 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:38 vm00 bash[20726]: audit 2026-03-10T14:48:37.299601+0000 mon.a (mon.0) 387 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:48:38.624 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:38 vm03 bash[23394]: cluster 2026-03-10T14:48:36.707681+0000 mgr.y (mgr.14152) 120 : cluster [DBG] pgmap v87: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T14:48:38.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:38 vm03 bash[23394]: cluster 2026-03-10T14:48:36.707681+0000 mgr.y (mgr.14152) 120 : cluster [DBG] pgmap v87: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T14:48:38.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:38 vm03 bash[23394]: cluster 2026-03-10T14:48:37.138876+0000 mon.a (mon.0) 382 : cluster [INF] osd.2 v2:192.168.123.100:6809/4087124508 boot 2026-03-10T14:48:38.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:38 vm03 bash[23394]: cluster 2026-03-10T14:48:37.138876+0000 mon.a (mon.0) 382 : cluster [INF] osd.2 v2:192.168.123.100:6809/4087124508 boot 2026-03-10T14:48:38.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:38 vm03 bash[23394]: cluster 2026-03-10T14:48:37.140120+0000 mon.a (mon.0) 383 : cluster [DBG] osdmap e18: 3 total, 3 up, 3 in 2026-03-10T14:48:38.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:38 vm03 bash[23394]: cluster 2026-03-10T14:48:37.140120+0000 mon.a (mon.0) 383 : cluster [DBG] osdmap e18: 3 total, 3 up, 3 in 2026-03-10T14:48:38.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:38 vm03 bash[23394]: audit 2026-03-10T14:48:37.140405+0000 mon.a (mon.0) 384 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T14:48:38.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:38 vm03 bash[23394]: audit 2026-03-10T14:48:37.140405+0000 mon.a (mon.0) 384 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T14:48:38.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:38 vm03 bash[23394]: audit 2026-03-10T14:48:37.289310+0000 mon.a 
(mon.0) 385 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:48:38.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:38 vm03 bash[23394]: audit 2026-03-10T14:48:37.289310+0000 mon.a (mon.0) 385 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:48:38.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:38 vm03 bash[23394]: audit 2026-03-10T14:48:37.294465+0000 mon.a (mon.0) 386 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:48:38.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:38 vm03 bash[23394]: audit 2026-03-10T14:48:37.294465+0000 mon.a (mon.0) 386 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:48:38.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:38 vm03 bash[23394]: audit 2026-03-10T14:48:37.299601+0000 mon.a (mon.0) 387 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:48:38.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:38 vm03 bash[23394]: audit 2026-03-10T14:48:37.299601+0000 mon.a (mon.0) 387 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:48:39.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:39 vm00 bash[28403]: cluster 2026-03-10T14:48:38.423825+0000 mon.a (mon.0) 388 : cluster [DBG] osdmap e19: 3 total, 3 up, 3 in 2026-03-10T14:48:39.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:39 vm00 bash[28403]: cluster 2026-03-10T14:48:38.423825+0000 mon.a (mon.0) 388 : cluster [DBG] osdmap e19: 3 total, 3 up, 3 in 2026-03-10T14:48:39.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:39 vm00 bash[28403]: cluster 2026-03-10T14:48:38.707951+0000 mgr.y (mgr.14152) 121 : cluster [DBG] pgmap v90: 0 pgs: ; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail 
2026-03-10T14:48:39.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:39 vm00 bash[28403]: cluster 2026-03-10T14:48:38.707951+0000 mgr.y (mgr.14152) 121 : cluster [DBG] pgmap v90: 0 pgs: ; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail 2026-03-10T14:48:39.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:39 vm00 bash[28403]: audit 2026-03-10T14:48:38.741429+0000 mon.a (mon.0) 389 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch 2026-03-10T14:48:39.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:39 vm00 bash[28403]: audit 2026-03-10T14:48:38.741429+0000 mon.a (mon.0) 389 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch 2026-03-10T14:48:39.721 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:39 vm00 bash[20726]: cluster 2026-03-10T14:48:38.423825+0000 mon.a (mon.0) 388 : cluster [DBG] osdmap e19: 3 total, 3 up, 3 in 2026-03-10T14:48:39.721 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:39 vm00 bash[20726]: cluster 2026-03-10T14:48:38.423825+0000 mon.a (mon.0) 388 : cluster [DBG] osdmap e19: 3 total, 3 up, 3 in 2026-03-10T14:48:39.721 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:39 vm00 bash[20726]: cluster 2026-03-10T14:48:38.707951+0000 mgr.y (mgr.14152) 121 : cluster [DBG] pgmap v90: 0 pgs: ; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail 2026-03-10T14:48:39.721 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:39 vm00 bash[20726]: cluster 2026-03-10T14:48:38.707951+0000 mgr.y (mgr.14152) 121 : cluster [DBG] pgmap v90: 0 pgs: ; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail 2026-03-10T14:48:39.721 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:39 vm00 bash[20726]: 
audit 2026-03-10T14:48:38.741429+0000 mon.a (mon.0) 389 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch 2026-03-10T14:48:39.721 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:39 vm00 bash[20726]: audit 2026-03-10T14:48:38.741429+0000 mon.a (mon.0) 389 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch 2026-03-10T14:48:39.874 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:39 vm03 bash[23394]: cluster 2026-03-10T14:48:38.423825+0000 mon.a (mon.0) 388 : cluster [DBG] osdmap e19: 3 total, 3 up, 3 in 2026-03-10T14:48:39.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:39 vm03 bash[23394]: cluster 2026-03-10T14:48:38.423825+0000 mon.a (mon.0) 388 : cluster [DBG] osdmap e19: 3 total, 3 up, 3 in 2026-03-10T14:48:39.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:39 vm03 bash[23394]: cluster 2026-03-10T14:48:38.707951+0000 mgr.y (mgr.14152) 121 : cluster [DBG] pgmap v90: 0 pgs: ; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail 2026-03-10T14:48:39.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:39 vm03 bash[23394]: cluster 2026-03-10T14:48:38.707951+0000 mgr.y (mgr.14152) 121 : cluster [DBG] pgmap v90: 0 pgs: ; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail 2026-03-10T14:48:39.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:39 vm03 bash[23394]: audit 2026-03-10T14:48:38.741429+0000 mon.a (mon.0) 389 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch 2026-03-10T14:48:39.875 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:39 vm03 bash[23394]: audit 2026-03-10T14:48:38.741429+0000 mon.a (mon.0) 389 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch 2026-03-10T14:48:40.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:40 vm00 bash[28403]: audit 2026-03-10T14:48:39.393865+0000 mon.a (mon.0) 390 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-10T14:48:40.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:40 vm00 bash[28403]: audit 2026-03-10T14:48:39.393865+0000 mon.a (mon.0) 390 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-10T14:48:40.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:40 vm00 bash[28403]: cluster 2026-03-10T14:48:39.401111+0000 mon.a (mon.0) 391 : cluster [DBG] osdmap e20: 3 total, 3 up, 3 in 2026-03-10T14:48:40.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:40 vm00 bash[28403]: cluster 2026-03-10T14:48:39.401111+0000 mon.a (mon.0) 391 : cluster [DBG] osdmap e20: 3 total, 3 up, 3 in 2026-03-10T14:48:40.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:40 vm00 bash[28403]: audit 2026-03-10T14:48:39.404244+0000 mon.a (mon.0) 392 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T14:48:40.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:40 vm00 
bash[28403]: audit 2026-03-10T14:48:39.404244+0000 mon.a (mon.0) 392 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T14:48:40.721 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:40 vm00 bash[20726]: audit 2026-03-10T14:48:39.393865+0000 mon.a (mon.0) 390 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-10T14:48:40.721 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:40 vm00 bash[20726]: audit 2026-03-10T14:48:39.393865+0000 mon.a (mon.0) 390 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-10T14:48:40.721 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:40 vm00 bash[20726]: cluster 2026-03-10T14:48:39.401111+0000 mon.a (mon.0) 391 : cluster [DBG] osdmap e20: 3 total, 3 up, 3 in 2026-03-10T14:48:40.721 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:40 vm00 bash[20726]: cluster 2026-03-10T14:48:39.401111+0000 mon.a (mon.0) 391 : cluster [DBG] osdmap e20: 3 total, 3 up, 3 in 2026-03-10T14:48:40.721 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:40 vm00 bash[20726]: audit 2026-03-10T14:48:39.404244+0000 mon.a (mon.0) 392 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T14:48:40.721 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:40 vm00 bash[20726]: audit 2026-03-10T14:48:39.404244+0000 mon.a (mon.0) 392 : audit [INF] 
from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T14:48:40.874 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:40 vm03 bash[23394]: audit 2026-03-10T14:48:39.393865+0000 mon.a (mon.0) 390 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-10T14:48:40.874 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:40 vm03 bash[23394]: audit 2026-03-10T14:48:39.393865+0000 mon.a (mon.0) 390 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-10T14:48:40.874 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:40 vm03 bash[23394]: cluster 2026-03-10T14:48:39.401111+0000 mon.a (mon.0) 391 : cluster [DBG] osdmap e20: 3 total, 3 up, 3 in 2026-03-10T14:48:40.874 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:40 vm03 bash[23394]: cluster 2026-03-10T14:48:39.401111+0000 mon.a (mon.0) 391 : cluster [DBG] osdmap e20: 3 total, 3 up, 3 in 2026-03-10T14:48:40.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:40 vm03 bash[23394]: audit 2026-03-10T14:48:39.404244+0000 mon.a (mon.0) 392 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T14:48:40.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:40 vm03 bash[23394]: audit 2026-03-10T14:48:39.404244+0000 mon.a (mon.0) 392 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd pool 
application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T14:48:41.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:41 vm00 bash[28403]: audit 2026-03-10T14:48:40.402750+0000 mon.a (mon.0) 393 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-10T14:48:41.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:41 vm00 bash[28403]: audit 2026-03-10T14:48:40.402750+0000 mon.a (mon.0) 393 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-10T14:48:41.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:41 vm00 bash[28403]: cluster 2026-03-10T14:48:40.404729+0000 mon.a (mon.0) 394 : cluster [DBG] osdmap e21: 3 total, 3 up, 3 in 2026-03-10T14:48:41.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:41 vm00 bash[28403]: cluster 2026-03-10T14:48:40.404729+0000 mon.a (mon.0) 394 : cluster [DBG] osdmap e21: 3 total, 3 up, 3 in 2026-03-10T14:48:41.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:41 vm00 bash[28403]: audit 2026-03-10T14:48:40.510040+0000 mon.a (mon.0) 395 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T14:48:41.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:41 vm00 bash[28403]: audit 2026-03-10T14:48:40.510040+0000 mon.a (mon.0) 395 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T14:48:41.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:41 vm00 bash[28403]: audit 2026-03-10T14:48:40.528977+0000 mon.a (mon.0) 396 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T14:48:41.721 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:41 vm00 bash[28403]: audit 2026-03-10T14:48:40.528977+0000 mon.a (mon.0) 396 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T14:48:41.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:41 vm00 bash[28403]: audit 2026-03-10T14:48:40.529437+0000 mon.a (mon.0) 397 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T14:48:41.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:41 vm00 bash[28403]: audit 2026-03-10T14:48:40.529437+0000 mon.a (mon.0) 397 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T14:48:41.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:41 vm00 bash[28403]: audit 2026-03-10T14:48:40.529514+0000 mon.a (mon.0) 398 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T14:48:41.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:41 vm00 bash[28403]: audit 2026-03-10T14:48:40.529514+0000 mon.a (mon.0) 398 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T14:48:41.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:41 vm00 bash[28403]: audit 2026-03-10T14:48:40.529584+0000 mon.a (mon.0) 399 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T14:48:41.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:41 vm00 bash[28403]: audit 2026-03-10T14:48:40.529584+0000 mon.a (mon.0) 399 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T14:48:41.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:41 vm00 
bash[28403]: audit 2026-03-10T14:48:40.531539+0000 mon.a (mon.0) 400 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T14:48:41.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:41 vm00 bash[28403]: audit 2026-03-10T14:48:40.531539+0000 mon.a (mon.0) 400 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T14:48:41.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:41 vm00 bash[28403]: audit 2026-03-10T14:48:40.531610+0000 mon.a (mon.0) 401 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T14:48:41.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:41 vm00 bash[28403]: audit 2026-03-10T14:48:40.531610+0000 mon.a (mon.0) 401 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T14:48:41.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:41 vm00 bash[28403]: audit 2026-03-10T14:48:40.531811+0000 mon.a (mon.0) 402 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T14:48:41.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:41 vm00 bash[28403]: audit 2026-03-10T14:48:40.531811+0000 mon.a (mon.0) 402 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T14:48:41.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:41 vm00 bash[28403]: audit 2026-03-10T14:48:40.534180+0000 mon.b (mon.1) 8 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T14:48:41.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:41 vm00 bash[28403]: audit 2026-03-10T14:48:40.534180+0000 mon.b (mon.1) 8 
: audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T14:48:41.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:41 vm00 bash[28403]: audit 2026-03-10T14:48:40.550754+0000 mon.c (mon.2) 8 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T14:48:41.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:41 vm00 bash[28403]: audit 2026-03-10T14:48:40.550754+0000 mon.c (mon.2) 8 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T14:48:41.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:41 vm00 bash[28403]: audit 2026-03-10T14:48:40.551914+0000 mon.b (mon.1) 9 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T14:48:41.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:41 vm00 bash[28403]: audit 2026-03-10T14:48:40.551914+0000 mon.b (mon.1) 9 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T14:48:41.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:41 vm00 bash[28403]: audit 2026-03-10T14:48:40.552164+0000 mon.a (mon.0) 403 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T14:48:41.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:41 vm00 bash[28403]: audit 2026-03-10T14:48:40.552164+0000 mon.a (mon.0) 403 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T14:48:41.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:41 vm00 bash[28403]: audit 2026-03-10T14:48:40.552239+0000 mon.a (mon.0) 404 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T14:48:41.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:41 vm00 bash[28403]: audit 
2026-03-10T14:48:40.552239+0000 mon.a (mon.0) 404 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T14:48:41.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:41 vm00 bash[28403]: audit 2026-03-10T14:48:40.552295+0000 mon.a (mon.0) 405 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T14:48:41.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:41 vm00 bash[28403]: audit 2026-03-10T14:48:40.552295+0000 mon.a (mon.0) 405 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T14:48:41.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:41 vm00 bash[28403]: audit 2026-03-10T14:48:40.577253+0000 mon.c (mon.2) 9 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T14:48:41.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:41 vm00 bash[28403]: audit 2026-03-10T14:48:40.577253+0000 mon.c (mon.2) 9 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T14:48:41.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:41 vm00 bash[28403]: cluster 2026-03-10T14:48:40.708255+0000 mgr.y (mgr.14152) 122 : cluster [DBG] pgmap v93: 1 pgs: 1 unknown; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail 2026-03-10T14:48:41.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:41 vm00 bash[28403]: cluster 2026-03-10T14:48:40.708255+0000 mgr.y (mgr.14152) 122 : cluster [DBG] pgmap v93: 1 pgs: 1 unknown; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail 2026-03-10T14:48:41.722 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:41 vm00 bash[20726]: audit 2026-03-10T14:48:40.402750+0000 mon.a (mon.0) 393 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd='[{"prefix": "osd pool application enable", 
"format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-10T14:48:41.722 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:41 vm00 bash[20726]: audit 2026-03-10T14:48:40.402750+0000 mon.a (mon.0) 393 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-10T14:48:41.722 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:41 vm00 bash[20726]: cluster 2026-03-10T14:48:40.404729+0000 mon.a (mon.0) 394 : cluster [DBG] osdmap e21: 3 total, 3 up, 3 in 2026-03-10T14:48:41.722 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:41 vm00 bash[20726]: cluster 2026-03-10T14:48:40.404729+0000 mon.a (mon.0) 394 : cluster [DBG] osdmap e21: 3 total, 3 up, 3 in 2026-03-10T14:48:41.722 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:41 vm00 bash[20726]: audit 2026-03-10T14:48:40.510040+0000 mon.a (mon.0) 395 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T14:48:41.722 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:41 vm00 bash[20726]: audit 2026-03-10T14:48:40.510040+0000 mon.a (mon.0) 395 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T14:48:41.722 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:41 vm00 bash[20726]: audit 2026-03-10T14:48:40.528977+0000 mon.a (mon.0) 396 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T14:48:41.722 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:41 vm00 bash[20726]: audit 2026-03-10T14:48:40.528977+0000 mon.a (mon.0) 396 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T14:48:41.722 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:41 vm00 bash[20726]: audit 2026-03-10T14:48:40.529437+0000 mon.a (mon.0) 397 : audit [DBG] 
from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T14:48:41.722 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:41 vm00 bash[20726]: audit 2026-03-10T14:48:40.529437+0000 mon.a (mon.0) 397 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T14:48:41.722 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:41 vm00 bash[20726]: audit 2026-03-10T14:48:40.529514+0000 mon.a (mon.0) 398 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T14:48:41.722 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:41 vm00 bash[20726]: audit 2026-03-10T14:48:40.529514+0000 mon.a (mon.0) 398 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T14:48:41.722 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:41 vm00 bash[20726]: audit 2026-03-10T14:48:40.529584+0000 mon.a (mon.0) 399 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T14:48:41.722 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:41 vm00 bash[20726]: audit 2026-03-10T14:48:40.529584+0000 mon.a (mon.0) 399 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T14:48:41.722 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:41 vm00 bash[20726]: audit 2026-03-10T14:48:40.531539+0000 mon.a (mon.0) 400 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T14:48:41.722 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:41 vm00 bash[20726]: audit 2026-03-10T14:48:40.531539+0000 mon.a (mon.0) 400 : audit [DBG] from='mgr.14152 
192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T14:48:41.722 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:41 vm00 bash[20726]: audit 2026-03-10T14:48:40.531610+0000 mon.a (mon.0) 401 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T14:48:41.722 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:41 vm00 bash[20726]: audit 2026-03-10T14:48:40.531610+0000 mon.a (mon.0) 401 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T14:48:41.722 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:41 vm00 bash[20726]: audit 2026-03-10T14:48:40.531811+0000 mon.a (mon.0) 402 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T14:48:41.722 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:41 vm00 bash[20726]: audit 2026-03-10T14:48:40.531811+0000 mon.a (mon.0) 402 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T14:48:41.722 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:41 vm00 bash[20726]: audit 2026-03-10T14:48:40.534180+0000 mon.b (mon.1) 8 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T14:48:41.722 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:41 vm00 bash[20726]: audit 2026-03-10T14:48:40.534180+0000 mon.b (mon.1) 8 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T14:48:41.722 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:41 vm00 bash[20726]: audit 2026-03-10T14:48:40.550754+0000 mon.c (mon.2) 8 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T14:48:41.722 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:41 vm00 bash[20726]: audit 2026-03-10T14:48:40.550754+0000 mon.c (mon.2) 8 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T14:48:41.722 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:41 vm00 bash[20726]: audit 2026-03-10T14:48:40.551914+0000 mon.b (mon.1) 9 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T14:48:41.722 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:41 vm00 bash[20726]: audit 2026-03-10T14:48:40.551914+0000 mon.b (mon.1) 9 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T14:48:41.722 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:41 vm00 bash[20726]: audit 2026-03-10T14:48:40.552164+0000 mon.a (mon.0) 403 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T14:48:41.722 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:41 vm00 bash[20726]: audit 2026-03-10T14:48:40.552164+0000 mon.a (mon.0) 403 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T14:48:41.722 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:41 vm00 bash[20726]: audit 2026-03-10T14:48:40.552239+0000 mon.a (mon.0) 404 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T14:48:41.722 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:41 vm00 bash[20726]: audit 2026-03-10T14:48:40.552239+0000 mon.a (mon.0) 404 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T14:48:41.722 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:41 vm00 bash[20726]: audit 2026-03-10T14:48:40.552295+0000 mon.a (mon.0) 405 : audit [DBG] 
from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T14:48:41.722 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:41 vm00 bash[20726]: audit 2026-03-10T14:48:40.577253+0000 mon.c (mon.2) 9 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-10T14:48:41.722 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:41 vm00 bash[20726]: cluster 2026-03-10T14:48:40.708255+0000 mgr.y (mgr.14152) 122 : cluster [DBG] pgmap v93: 1 pgs: 1 unknown; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail
2026-03-10T14:48:41.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:41 vm03 bash[23394]: audit 2026-03-10T14:48:40.402750+0000 mon.a (mon.0) 393 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
2026-03-10T14:48:41.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:41 vm03 bash[23394]: cluster 2026-03-10T14:48:40.404729+0000 mon.a (mon.0) 394 : cluster [DBG] osdmap e21: 3 total, 3 up, 3 in
2026-03-10T14:48:41.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:41 vm03 bash[23394]: audit 2026-03-10T14:48:40.510040+0000 mon.a (mon.0) 395 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
2026-03-10T14:48:41.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:41 vm03 bash[23394]: audit 2026-03-10T14:48:40.528977+0000 mon.a (mon.0) 396 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-10T14:48:41.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:41 vm03 bash[23394]: audit 2026-03-10T14:48:40.529437+0000 mon.a (mon.0) 397 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T14:48:41.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:41 vm03 bash[23394]: audit 2026-03-10T14:48:40.529514+0000 mon.a (mon.0) 398 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T14:48:41.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:41 vm03 bash[23394]: audit 2026-03-10T14:48:40.529584+0000 mon.a (mon.0) 399 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T14:48:41.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:41 vm03 bash[23394]: audit 2026-03-10T14:48:40.531539+0000 mon.a (mon.0) 400 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T14:48:41.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:41 vm03 bash[23394]: audit 2026-03-10T14:48:40.531610+0000 mon.a (mon.0) 401 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T14:48:41.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:41 vm03 bash[23394]: audit 2026-03-10T14:48:40.531811+0000 mon.a (mon.0) 402 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T14:48:41.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:41 vm03 bash[23394]: audit 2026-03-10T14:48:40.534180+0000 mon.b (mon.1) 8 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
2026-03-10T14:48:41.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:41 vm03 bash[23394]: audit 2026-03-10T14:48:40.550754+0000 mon.c (mon.2) 8 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
2026-03-10T14:48:41.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:41 vm03 bash[23394]: audit 2026-03-10T14:48:40.551914+0000 mon.b (mon.1) 9 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-10T14:48:41.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:41 vm03 bash[23394]: audit 2026-03-10T14:48:40.552164+0000 mon.a (mon.0) 403 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T14:48:41.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:41 vm03 bash[23394]: audit 2026-03-10T14:48:40.552239+0000 mon.a (mon.0) 404 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T14:48:41.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:41 vm03 bash[23394]: audit 2026-03-10T14:48:40.552295+0000 mon.a (mon.0) 405 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T14:48:41.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:41 vm03 bash[23394]: audit 2026-03-10T14:48:40.577253+0000 mon.c (mon.2) 9 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-10T14:48:41.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:41 vm03 bash[23394]: cluster 2026-03-10T14:48:40.708255+0000 mgr.y (mgr.14152) 122 : cluster [DBG] pgmap v93: 1 pgs: 1 unknown; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail
2026-03-10T14:48:42.089 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/mon.c/config
2026-03-10T14:48:42.720 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:42 vm00 bash[28403]: cluster 2026-03-10T14:48:41.437921+0000 mon.a (mon.0) 406 : cluster [DBG] mgrmap e15: y(active, since 2m), standbys: x
2026-03-10T14:48:42.721 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:42 vm00 bash[28403]: cluster 2026-03-10T14:48:41.437965+0000 mon.a (mon.0) 407 : cluster [DBG] osdmap e22: 3 total, 3 up, 3 in
2026-03-10T14:48:42.721 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:42 vm00 bash[20726]: cluster 2026-03-10T14:48:41.437921+0000 mon.a (mon.0) 406 : cluster [DBG] mgrmap e15: y(active, since 2m), standbys: x
2026-03-10T14:48:42.721 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:42 vm00 bash[20726]: cluster 2026-03-10T14:48:41.437965+0000 mon.a (mon.0) 407 : cluster [DBG] osdmap e22: 3 total, 3 up, 3 in
2026-03-10T14:48:42.874 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:42 vm03 bash[23394]: cluster 2026-03-10T14:48:41.437921+0000 mon.a (mon.0) 406 : cluster [DBG] mgrmap e15: y(active, since 2m), standbys: x
2026-03-10T14:48:42.874 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:42 vm03 bash[23394]: cluster 2026-03-10T14:48:41.437965+0000 mon.a (mon.0) 407 : cluster [DBG] osdmap e22: 3 total, 3 up, 3 in
2026-03-10T14:48:43.760 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T14:48:43.779 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 -- ceph orch daemon add osd vm00:/dev/vdb
2026-03-10T14:48:44.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:44 vm00 bash[28403]: cluster 2026-03-10T14:48:42.708512+0000 mgr.y (mgr.14152) 123 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
2026-03-10T14:48:44.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:44 vm00 bash[28403]: cephadm 2026-03-10T14:48:42.963482+0000 mgr.y (mgr.14152) 124 : cephadm [INF] Detected new or changed devices on vm00
2026-03-10T14:48:44.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:44 vm00 bash[28403]: audit 2026-03-10T14:48:42.969075+0000 mon.a (mon.0) 408 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:48:44.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:44 vm00 bash[28403]: audit 2026-03-10T14:48:42.975657+0000 mon.a (mon.0) 409 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:48:44.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:44 vm00 bash[28403]: audit 2026-03-10T14:48:42.978489+0000 mon.a (mon.0) 410 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch
2026-03-10T14:48:44.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:44 vm00 bash[28403]: audit 2026-03-10T14:48:42.979578+0000 mon.a (mon.0) 411 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:48:44.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:44 vm00 bash[28403]: audit 2026-03-10T14:48:42.980219+0000 mon.a (mon.0) 412 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T14:48:44.221 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:44 vm00 bash[28403]: audit 2026-03-10T14:48:42.984504+0000 mon.a (mon.0) 413 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:48:44.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:44 vm00 bash[20726]: cluster 2026-03-10T14:48:42.708512+0000 mgr.y (mgr.14152) 123 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
2026-03-10T14:48:44.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:44 vm00 bash[20726]: cephadm 2026-03-10T14:48:42.963482+0000 mgr.y (mgr.14152) 124 : cephadm [INF] Detected new or changed devices on vm00
2026-03-10T14:48:44.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:44 vm00 bash[20726]: audit 2026-03-10T14:48:42.969075+0000 mon.a (mon.0) 408 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:48:44.221 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:44 vm00 bash[20726]: audit 2026-03-10T14:48:42.975657+0000 mon.a (mon.0) 409 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:48:44.222 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:44 vm00 bash[20726]: audit 2026-03-10T14:48:42.978489+0000 mon.a (mon.0) 410 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch
2026-03-10T14:48:44.222 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:44 vm00 bash[20726]: audit 2026-03-10T14:48:42.979578+0000 mon.a (mon.0) 411 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:48:44.222 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:44 vm00 bash[20726]: audit 2026-03-10T14:48:42.980219+0000 mon.a (mon.0) 412 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T14:48:44.222 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:44 vm00 bash[20726]: audit 2026-03-10T14:48:42.984504+0000 mon.a (mon.0) 413 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:48:44.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:44 vm03 bash[23394]: cluster 2026-03-10T14:48:42.708512+0000 mgr.y (mgr.14152) 123 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
2026-03-10T14:48:44.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:44 vm03 bash[23394]: cephadm 2026-03-10T14:48:42.963482+0000 mgr.y (mgr.14152) 124 : cephadm [INF] Detected new or changed devices on vm00
2026-03-10T14:48:44.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:44 vm03 bash[23394]: audit 2026-03-10T14:48:42.969075+0000 mon.a (mon.0) 408 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:48:44.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:44 vm03 bash[23394]: audit 2026-03-10T14:48:42.975657+0000 mon.a (mon.0) 409 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:48:44.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:44 vm03 bash[23394]: audit 2026-03-10T14:48:42.978489+0000 mon.a (mon.0) 410 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch
2026-03-10T14:48:44.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:44 vm03 bash[23394]: audit 2026-03-10T14:48:42.979578+0000 mon.a (mon.0) 411 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:48:44.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:44 vm03 bash[23394]: audit 2026-03-10T14:48:42.980219+0000 mon.a (mon.0) 412 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T14:48:44.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:44 vm03 bash[23394]: audit 2026-03-10T14:48:42.984504+0000 mon.a (mon.0) 413 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:48:46.470 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:46 vm00 bash[20726]: cluster 2026-03-10T14:48:44.708758+0000 mgr.y (mgr.14152) 125 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T14:48:46.471 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:46 vm00 bash[28403]: cluster 2026-03-10T14:48:44.708758+0000 mgr.y (mgr.14152) 125 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T14:48:46.624 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:46 vm03 bash[23394]: cluster 2026-03-10T14:48:44.708758+0000 mgr.y (mgr.14152) 125 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T14:48:47.624 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:47 vm03 bash[23394]: cluster 2026-03-10T14:48:46.709061+0000 mgr.y (mgr.14152) 126 : cluster [DBG] pgmap v97: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T14:48:47.720 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:47 vm00 bash[28403]: cluster 2026-03-10T14:48:46.709061+0000 mgr.y (mgr.14152) 126 : cluster [DBG] pgmap v97: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T14:48:47.720 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:47 vm00 bash[20726]: cluster 2026-03-10T14:48:46.709061+0000 mgr.y (mgr.14152) 126 : cluster [DBG] pgmap v97: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T14:48:48.400 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/mon.c/config
2026-03-10T14:48:49.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:48 vm03 bash[23394]: audit 2026-03-10T14:48:48.728769+0000 mon.a (mon.0) 414 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T14:48:49.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:48 vm03 bash[23394]: audit 2026-03-10T14:48:48.730184+0000 mon.a (mon.0) 415 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T14:48:49.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:48 vm03 bash[23394]: audit 2026-03-10T14:48:48.730622+0000 mon.a (mon.0) 416 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:48:49.220 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:48 vm00 bash[28403]: audit 2026-03-10T14:48:48.728769+0000 mon.a (mon.0) 414 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T14:48:49.220 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:48 vm00 bash[28403]: audit 2026-03-10T14:48:48.730184+0000 mon.a (mon.0) 415 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T14:48:49.220 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:48 vm00 bash[28403]: audit 2026-03-10T14:48:48.730622+0000 mon.a (mon.0) 416 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:48:49.220 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:48 vm00 bash[20726]: audit 2026-03-10T14:48:48.728769+0000 mon.a (mon.0) 414 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T14:48:49.220 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:48 vm00 bash[20726]: audit 2026-03-10T14:48:48.730184+0000 mon.a (mon.0) 415 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T14:48:49.220 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:48 vm00 bash[20726]: audit 2026-03-10T14:48:48.730622+0000 mon.a (mon.0) 416 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:48:50.124 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:49 vm03 bash[23394]: cluster 2026-03-10T14:48:48.709333+0000 mgr.y (mgr.14152) 127 : cluster [DBG] pgmap v98: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T14:48:50.124 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:49 vm03 bash[23394]: audit 2026-03-10T14:48:48.727456+0000 mgr.y (mgr.14152) 128 : audit [DBG] from='client.24194 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T14:48:50.220 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:49 vm00 bash[28403]: cluster 2026-03-10T14:48:48.709333+0000 mgr.y (mgr.14152) 127 : cluster [DBG] pgmap v98: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T14:48:50.220 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:49 vm00 bash[28403]: audit 2026-03-10T14:48:48.727456+0000 mgr.y (mgr.14152) 128 : audit [DBG] from='client.24194 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T14:48:50.220 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:49 vm00 bash[20726]: cluster 2026-03-10T14:48:48.709333+0000 mgr.y (mgr.14152) 127 : cluster [DBG] pgmap v98: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T14:48:50.220 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:49 vm00 bash[20726]: audit 2026-03-10T14:48:48.727456+0000 mgr.y (mgr.14152) 128 : audit [DBG] from='client.24194 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T14:48:52.220 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:51 vm00 bash[28403]: cluster 2026-03-10T14:48:50.709582+0000 mgr.y (mgr.14152) 129 : cluster [DBG] pgmap v99: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T14:48:52.220 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:51 vm00 bash[20726]: cluster 2026-03-10T14:48:50.709582+0000 mgr.y (mgr.14152) 129 : cluster [DBG] pgmap v99: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T14:48:52.374 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:51 vm03 bash[23394]: cluster 2026-03-10T14:48:50.709582+0000 mgr.y (mgr.14152) 129 : cluster [DBG] pgmap v99: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T14:48:54.220 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:53 vm00 bash[28403]: cluster 2026-03-10T14:48:52.709840+0000 mgr.y (mgr.14152) 130 : cluster [DBG] pgmap v100: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T14:48:54.220 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:53 vm00 bash[20726]: cluster 2026-03-10T14:48:52.709840+0000 mgr.y (mgr.14152) 130 : cluster [DBG] pgmap v100: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T14:48:54.374 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:53 vm03 bash[23394]: cluster 2026-03-10T14:48:52.709840+0000 mgr.y (mgr.14152) 130 : cluster [DBG] pgmap v100: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T14:48:55.220 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:54 vm00 bash[28403]: audit 2026-03-10T14:48:54.250222+0000 mon.a (mon.0) 417 : audit [INF] from='client.?
' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "536f0633-b026-45b8-8c47-eb23cccf9b64"}]: dispatch 2026-03-10T14:48:55.220 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:54 vm00 bash[28403]: audit 2026-03-10T14:48:54.250222+0000 mon.a (mon.0) 417 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "536f0633-b026-45b8-8c47-eb23cccf9b64"}]: dispatch 2026-03-10T14:48:55.220 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:54 vm00 bash[28403]: audit 2026-03-10T14:48:54.251745+0000 mon.b (mon.1) 10 : audit [INF] from='client.? 192.168.123.100:0/1947976208' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "536f0633-b026-45b8-8c47-eb23cccf9b64"}]: dispatch 2026-03-10T14:48:55.220 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:54 vm00 bash[28403]: audit 2026-03-10T14:48:54.251745+0000 mon.b (mon.1) 10 : audit [INF] from='client.? 192.168.123.100:0/1947976208' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "536f0633-b026-45b8-8c47-eb23cccf9b64"}]: dispatch 2026-03-10T14:48:55.220 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:54 vm00 bash[28403]: audit 2026-03-10T14:48:54.253148+0000 mon.a (mon.0) 418 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "536f0633-b026-45b8-8c47-eb23cccf9b64"}]': finished 2026-03-10T14:48:55.220 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:54 vm00 bash[28403]: audit 2026-03-10T14:48:54.253148+0000 mon.a (mon.0) 418 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "536f0633-b026-45b8-8c47-eb23cccf9b64"}]': finished 2026-03-10T14:48:55.220 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:54 vm00 bash[28403]: cluster 2026-03-10T14:48:54.256412+0000 mon.a (mon.0) 419 : cluster [DBG] osdmap e23: 4 total, 3 up, 4 in 2026-03-10T14:48:55.220 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:54 vm00 bash[28403]: cluster 2026-03-10T14:48:54.256412+0000 mon.a (mon.0) 419 : cluster [DBG] osdmap e23: 4 total, 3 up, 4 in 2026-03-10T14:48:55.220 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:54 vm00 bash[28403]: audit 2026-03-10T14:48:54.256552+0000 mon.a (mon.0) 420 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T14:48:55.220 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:54 vm00 bash[28403]: audit 2026-03-10T14:48:54.256552+0000 mon.a (mon.0) 420 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T14:48:55.220 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:54 vm00 bash[20726]: audit 2026-03-10T14:48:54.250222+0000 mon.a (mon.0) 417 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "536f0633-b026-45b8-8c47-eb23cccf9b64"}]: dispatch 2026-03-10T14:48:55.220 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:54 vm00 bash[20726]: audit 2026-03-10T14:48:54.250222+0000 mon.a (mon.0) 417 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "536f0633-b026-45b8-8c47-eb23cccf9b64"}]: dispatch 2026-03-10T14:48:55.220 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:54 vm00 bash[20726]: audit 2026-03-10T14:48:54.251745+0000 mon.b (mon.1) 10 : audit [INF] from='client.? 
192.168.123.100:0/1947976208' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "536f0633-b026-45b8-8c47-eb23cccf9b64"}]: dispatch 2026-03-10T14:48:55.220 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:54 vm00 bash[20726]: audit 2026-03-10T14:48:54.251745+0000 mon.b (mon.1) 10 : audit [INF] from='client.? 192.168.123.100:0/1947976208' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "536f0633-b026-45b8-8c47-eb23cccf9b64"}]: dispatch 2026-03-10T14:48:55.220 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:54 vm00 bash[20726]: audit 2026-03-10T14:48:54.253148+0000 mon.a (mon.0) 418 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "536f0633-b026-45b8-8c47-eb23cccf9b64"}]': finished 2026-03-10T14:48:55.220 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:54 vm00 bash[20726]: audit 2026-03-10T14:48:54.253148+0000 mon.a (mon.0) 418 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "536f0633-b026-45b8-8c47-eb23cccf9b64"}]': finished 2026-03-10T14:48:55.220 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:54 vm00 bash[20726]: cluster 2026-03-10T14:48:54.256412+0000 mon.a (mon.0) 419 : cluster [DBG] osdmap e23: 4 total, 3 up, 4 in 2026-03-10T14:48:55.220 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:54 vm00 bash[20726]: cluster 2026-03-10T14:48:54.256412+0000 mon.a (mon.0) 419 : cluster [DBG] osdmap e23: 4 total, 3 up, 4 in 2026-03-10T14:48:55.220 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:54 vm00 bash[20726]: audit 2026-03-10T14:48:54.256552+0000 mon.a (mon.0) 420 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T14:48:55.220 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:54 vm00 bash[20726]: audit 2026-03-10T14:48:54.256552+0000 mon.a (mon.0) 420 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 
cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T14:48:55.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:54 vm03 bash[23394]: audit 2026-03-10T14:48:54.250222+0000 mon.a (mon.0) 417 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "536f0633-b026-45b8-8c47-eb23cccf9b64"}]: dispatch 2026-03-10T14:48:55.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:54 vm03 bash[23394]: audit 2026-03-10T14:48:54.250222+0000 mon.a (mon.0) 417 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "536f0633-b026-45b8-8c47-eb23cccf9b64"}]: dispatch 2026-03-10T14:48:55.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:54 vm03 bash[23394]: audit 2026-03-10T14:48:54.251745+0000 mon.b (mon.1) 10 : audit [INF] from='client.? 192.168.123.100:0/1947976208' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "536f0633-b026-45b8-8c47-eb23cccf9b64"}]: dispatch 2026-03-10T14:48:55.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:54 vm03 bash[23394]: audit 2026-03-10T14:48:54.251745+0000 mon.b (mon.1) 10 : audit [INF] from='client.? 192.168.123.100:0/1947976208' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "536f0633-b026-45b8-8c47-eb23cccf9b64"}]: dispatch 2026-03-10T14:48:55.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:54 vm03 bash[23394]: audit 2026-03-10T14:48:54.253148+0000 mon.a (mon.0) 418 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "536f0633-b026-45b8-8c47-eb23cccf9b64"}]': finished 2026-03-10T14:48:55.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:54 vm03 bash[23394]: audit 2026-03-10T14:48:54.253148+0000 mon.a (mon.0) 418 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "536f0633-b026-45b8-8c47-eb23cccf9b64"}]': finished 2026-03-10T14:48:55.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:54 vm03 bash[23394]: cluster 2026-03-10T14:48:54.256412+0000 mon.a (mon.0) 419 : cluster [DBG] osdmap e23: 4 total, 3 up, 4 in 2026-03-10T14:48:55.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:54 vm03 bash[23394]: cluster 2026-03-10T14:48:54.256412+0000 mon.a (mon.0) 419 : cluster [DBG] osdmap e23: 4 total, 3 up, 4 in 2026-03-10T14:48:55.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:54 vm03 bash[23394]: audit 2026-03-10T14:48:54.256552+0000 mon.a (mon.0) 420 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T14:48:55.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:54 vm03 bash[23394]: audit 2026-03-10T14:48:54.256552+0000 mon.a (mon.0) 420 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T14:48:56.220 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:55 vm00 bash[28403]: cluster 2026-03-10T14:48:54.710119+0000 mgr.y (mgr.14152) 131 : cluster [DBG] pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T14:48:56.220 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:55 vm00 bash[28403]: cluster 2026-03-10T14:48:54.710119+0000 mgr.y (mgr.14152) 131 : cluster [DBG] pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T14:48:56.220 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:55 vm00 bash[28403]: audit 2026-03-10T14:48:54.930394+0000 mon.c (mon.2) 10 : audit [DBG] from='client.? 
192.168.123.100:0/3168343215' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T14:48:56.220 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:55 vm00 bash[28403]: audit 2026-03-10T14:48:54.930394+0000 mon.c (mon.2) 10 : audit [DBG] from='client.? 192.168.123.100:0/3168343215' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T14:48:56.220 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:55 vm00 bash[20726]: cluster 2026-03-10T14:48:54.710119+0000 mgr.y (mgr.14152) 131 : cluster [DBG] pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T14:48:56.220 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:55 vm00 bash[20726]: cluster 2026-03-10T14:48:54.710119+0000 mgr.y (mgr.14152) 131 : cluster [DBG] pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T14:48:56.220 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:55 vm00 bash[20726]: audit 2026-03-10T14:48:54.930394+0000 mon.c (mon.2) 10 : audit [DBG] from='client.? 192.168.123.100:0/3168343215' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T14:48:56.220 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:55 vm00 bash[20726]: audit 2026-03-10T14:48:54.930394+0000 mon.c (mon.2) 10 : audit [DBG] from='client.? 
192.168.123.100:0/3168343215' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T14:48:56.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:55 vm03 bash[23394]: cluster 2026-03-10T14:48:54.710119+0000 mgr.y (mgr.14152) 131 : cluster [DBG] pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T14:48:56.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:55 vm03 bash[23394]: cluster 2026-03-10T14:48:54.710119+0000 mgr.y (mgr.14152) 131 : cluster [DBG] pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T14:48:56.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:55 vm03 bash[23394]: audit 2026-03-10T14:48:54.930394+0000 mon.c (mon.2) 10 : audit [DBG] from='client.? 192.168.123.100:0/3168343215' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T14:48:56.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:55 vm03 bash[23394]: audit 2026-03-10T14:48:54.930394+0000 mon.c (mon.2) 10 : audit [DBG] from='client.? 
192.168.123.100:0/3168343215' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T14:48:58.219 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:57 vm00 bash[28403]: cluster 2026-03-10T14:48:56.710352+0000 mgr.y (mgr.14152) 132 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T14:48:58.219 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:57 vm00 bash[28403]: cluster 2026-03-10T14:48:56.710352+0000 mgr.y (mgr.14152) 132 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T14:48:58.219 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:57 vm00 bash[20726]: cluster 2026-03-10T14:48:56.710352+0000 mgr.y (mgr.14152) 132 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T14:48:58.219 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:57 vm00 bash[20726]: cluster 2026-03-10T14:48:56.710352+0000 mgr.y (mgr.14152) 132 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T14:48:58.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:57 vm03 bash[23394]: cluster 2026-03-10T14:48:56.710352+0000 mgr.y (mgr.14152) 132 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T14:48:58.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:57 vm03 bash[23394]: cluster 2026-03-10T14:48:56.710352+0000 mgr.y (mgr.14152) 132 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T14:49:00.219 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:59 vm00 bash[28403]: cluster 2026-03-10T14:48:58.710603+0000 mgr.y (mgr.14152) 133 : cluster [DBG] pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T14:49:00.219 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:48:59 vm00 
bash[28403]: cluster 2026-03-10T14:48:58.710603+0000 mgr.y (mgr.14152) 133 : cluster [DBG] pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T14:49:00.219 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:59 vm00 bash[20726]: cluster 2026-03-10T14:48:58.710603+0000 mgr.y (mgr.14152) 133 : cluster [DBG] pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T14:49:00.219 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:48:59 vm00 bash[20726]: cluster 2026-03-10T14:48:58.710603+0000 mgr.y (mgr.14152) 133 : cluster [DBG] pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T14:49:00.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:59 vm03 bash[23394]: cluster 2026-03-10T14:48:58.710603+0000 mgr.y (mgr.14152) 133 : cluster [DBG] pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T14:49:00.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:48:59 vm03 bash[23394]: cluster 2026-03-10T14:48:58.710603+0000 mgr.y (mgr.14152) 133 : cluster [DBG] pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T14:49:02.219 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:01 vm00 bash[28403]: cluster 2026-03-10T14:49:00.710966+0000 mgr.y (mgr.14152) 134 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T14:49:02.219 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:01 vm00 bash[28403]: cluster 2026-03-10T14:49:00.710966+0000 mgr.y (mgr.14152) 134 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T14:49:02.219 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:01 vm00 bash[20726]: cluster 2026-03-10T14:49:00.710966+0000 mgr.y (mgr.14152) 134 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 
2026-03-10T14:49:02.219 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:01 vm00 bash[20726]: cluster 2026-03-10T14:49:00.710966+0000 mgr.y (mgr.14152) 134 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T14:49:02.374 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:01 vm03 bash[23394]: cluster 2026-03-10T14:49:00.710966+0000 mgr.y (mgr.14152) 134 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T14:49:02.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:01 vm03 bash[23394]: cluster 2026-03-10T14:49:00.710966+0000 mgr.y (mgr.14152) 134 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T14:49:04.219 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:03 vm00 bash[28403]: cluster 2026-03-10T14:49:02.711248+0000 mgr.y (mgr.14152) 135 : cluster [DBG] pgmap v106: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T14:49:04.219 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:03 vm00 bash[28403]: cluster 2026-03-10T14:49:02.711248+0000 mgr.y (mgr.14152) 135 : cluster [DBG] pgmap v106: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T14:49:04.219 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:03 vm00 bash[20726]: cluster 2026-03-10T14:49:02.711248+0000 mgr.y (mgr.14152) 135 : cluster [DBG] pgmap v106: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T14:49:04.219 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:03 vm00 bash[20726]: cluster 2026-03-10T14:49:02.711248+0000 mgr.y (mgr.14152) 135 : cluster [DBG] pgmap v106: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T14:49:04.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:03 vm03 bash[23394]: cluster 2026-03-10T14:49:02.711248+0000 mgr.y (mgr.14152) 135 : cluster [DBG] pgmap v106: 
1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T14:49:04.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:03 vm03 bash[23394]: cluster 2026-03-10T14:49:02.711248+0000 mgr.y (mgr.14152) 135 : cluster [DBG] pgmap v106: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T14:49:05.203 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 10 14:49:04 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:49:05.203 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 10 14:49:05 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-10T14:49:05.203 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:04 vm00 bash[20726]: audit 2026-03-10T14:49:04.070479+0000 mon.a (mon.0) 421 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-10T14:49:05.203 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:04 vm00 bash[20726]: audit 2026-03-10T14:49:04.070479+0000 mon.a (mon.0) 421 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-10T14:49:05.203 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:04 vm00 bash[20726]: audit 2026-03-10T14:49:04.071184+0000 mon.a (mon.0) 422 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:49:05.203 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:04 vm00 bash[20726]: audit 2026-03-10T14:49:04.071184+0000 mon.a (mon.0) 422 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:49:05.204 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:04 vm00 bash[20726]: cephadm 2026-03-10T14:49:04.071774+0000 mgr.y (mgr.14152) 136 : cephadm [INF] Deploying daemon osd.3 on vm00 2026-03-10T14:49:05.204 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:04 vm00 bash[20726]: cephadm 2026-03-10T14:49:04.071774+0000 mgr.y (mgr.14152) 136 : cephadm [INF] Deploying daemon osd.3 on vm00 2026-03-10T14:49:05.204 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:04 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:49:05.204 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:05 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:49:05.204 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:49:04 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:49:05.204 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:49:05 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-10T14:49:05.204 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:04 vm00 bash[28403]: audit 2026-03-10T14:49:04.070479+0000 mon.a (mon.0) 421 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-10T14:49:05.204 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:04 vm00 bash[28403]: audit 2026-03-10T14:49:04.070479+0000 mon.a (mon.0) 421 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-10T14:49:05.204 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:04 vm00 bash[28403]: audit 2026-03-10T14:49:04.071184+0000 mon.a (mon.0) 422 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:49:05.204 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:04 vm00 bash[28403]: audit 2026-03-10T14:49:04.071184+0000 mon.a (mon.0) 422 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:49:05.204 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:04 vm00 bash[28403]: cephadm 2026-03-10T14:49:04.071774+0000 mgr.y (mgr.14152) 136 : cephadm [INF] Deploying daemon osd.3 on vm00 2026-03-10T14:49:05.204 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:04 vm00 bash[28403]: cephadm 2026-03-10T14:49:04.071774+0000 mgr.y (mgr.14152) 136 : cephadm [INF] Deploying daemon osd.3 on vm00 2026-03-10T14:49:05.204 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:04 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:49:05.204 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:05 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:49:05.204 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 10 14:49:04 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:49:05.204 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 10 14:49:05 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:49:05.204 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 10 14:49:04 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-10T14:49:05.204 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 10 14:49:05 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:49:05.374 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:04 vm03 bash[23394]: audit 2026-03-10T14:49:04.070479+0000 mon.a (mon.0) 421 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-10T14:49:05.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:04 vm03 bash[23394]: audit 2026-03-10T14:49:04.070479+0000 mon.a (mon.0) 421 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-10T14:49:05.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:04 vm03 bash[23394]: audit 2026-03-10T14:49:04.071184+0000 mon.a (mon.0) 422 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:49:05.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:04 vm03 bash[23394]: audit 2026-03-10T14:49:04.071184+0000 mon.a (mon.0) 422 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:49:05.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:04 vm03 bash[23394]: cephadm 2026-03-10T14:49:04.071774+0000 mgr.y (mgr.14152) 136 : cephadm [INF] Deploying daemon osd.3 on vm00 2026-03-10T14:49:05.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:04 vm03 bash[23394]: cephadm 2026-03-10T14:49:04.071774+0000 mgr.y (mgr.14152) 136 : cephadm [INF] Deploying 
daemon osd.3 on vm00 2026-03-10T14:49:06.219 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:05 vm00 bash[28403]: cluster 2026-03-10T14:49:04.711526+0000 mgr.y (mgr.14152) 137 : cluster [DBG] pgmap v107: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T14:49:06.219 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:05 vm00 bash[28403]: cluster 2026-03-10T14:49:04.711526+0000 mgr.y (mgr.14152) 137 : cluster [DBG] pgmap v107: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T14:49:06.219 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:05 vm00 bash[28403]: audit 2026-03-10T14:49:05.223096+0000 mon.a (mon.0) 423 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:49:06.219 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:05 vm00 bash[28403]: audit 2026-03-10T14:49:05.223096+0000 mon.a (mon.0) 423 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:49:06.219 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:05 vm00 bash[28403]: audit 2026-03-10T14:49:05.228266+0000 mon.a (mon.0) 424 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:49:06.219 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:05 vm00 bash[28403]: audit 2026-03-10T14:49:05.228266+0000 mon.a (mon.0) 424 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:49:06.219 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:05 vm00 bash[28403]: audit 2026-03-10T14:49:05.236872+0000 mon.a (mon.0) 425 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:49:06.219 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:05 vm00 bash[28403]: audit 2026-03-10T14:49:05.236872+0000 mon.a (mon.0) 425 : audit [INF] from='mgr.14152 
192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:49:06.219 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:05 vm00 bash[20726]: cluster 2026-03-10T14:49:04.711526+0000 mgr.y (mgr.14152) 137 : cluster [DBG] pgmap v107: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T14:49:06.219 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:05 vm00 bash[20726]: cluster 2026-03-10T14:49:04.711526+0000 mgr.y (mgr.14152) 137 : cluster [DBG] pgmap v107: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T14:49:06.219 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:05 vm00 bash[20726]: audit 2026-03-10T14:49:05.223096+0000 mon.a (mon.0) 423 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:49:06.219 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:05 vm00 bash[20726]: audit 2026-03-10T14:49:05.223096+0000 mon.a (mon.0) 423 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:49:06.219 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:05 vm00 bash[20726]: audit 2026-03-10T14:49:05.228266+0000 mon.a (mon.0) 424 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:49:06.219 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:05 vm00 bash[20726]: audit 2026-03-10T14:49:05.228266+0000 mon.a (mon.0) 424 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:49:06.219 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:05 vm00 bash[20726]: audit 2026-03-10T14:49:05.236872+0000 mon.a (mon.0) 425 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:49:06.219 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:05 vm00 bash[20726]: audit 2026-03-10T14:49:05.236872+0000 mon.a (mon.0) 425 : audit [INF] 
from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:49:06.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:05 vm03 bash[23394]: cluster 2026-03-10T14:49:04.711526+0000 mgr.y (mgr.14152) 137 : cluster [DBG] pgmap v107: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T14:49:06.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:05 vm03 bash[23394]: cluster 2026-03-10T14:49:04.711526+0000 mgr.y (mgr.14152) 137 : cluster [DBG] pgmap v107: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T14:49:06.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:05 vm03 bash[23394]: audit 2026-03-10T14:49:05.223096+0000 mon.a (mon.0) 423 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:49:06.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:05 vm03 bash[23394]: audit 2026-03-10T14:49:05.223096+0000 mon.a (mon.0) 423 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:49:06.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:05 vm03 bash[23394]: audit 2026-03-10T14:49:05.228266+0000 mon.a (mon.0) 424 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:49:06.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:05 vm03 bash[23394]: audit 2026-03-10T14:49:05.228266+0000 mon.a (mon.0) 424 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:49:06.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:05 vm03 bash[23394]: audit 2026-03-10T14:49:05.236872+0000 mon.a (mon.0) 425 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:49:06.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:05 vm03 bash[23394]: audit 2026-03-10T14:49:05.236872+0000 mon.a (mon.0) 425 : audit 
[INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:49:08.219 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:07 vm00 bash[28403]: cluster 2026-03-10T14:49:06.711897+0000 mgr.y (mgr.14152) 138 : cluster [DBG] pgmap v108: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T14:49:08.219 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:07 vm00 bash[28403]: cluster 2026-03-10T14:49:06.711897+0000 mgr.y (mgr.14152) 138 : cluster [DBG] pgmap v108: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T14:49:08.219 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:07 vm00 bash[20726]: cluster 2026-03-10T14:49:06.711897+0000 mgr.y (mgr.14152) 138 : cluster [DBG] pgmap v108: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T14:49:08.219 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:07 vm00 bash[20726]: cluster 2026-03-10T14:49:06.711897+0000 mgr.y (mgr.14152) 138 : cluster [DBG] pgmap v108: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T14:49:08.374 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:07 vm03 bash[23394]: cluster 2026-03-10T14:49:06.711897+0000 mgr.y (mgr.14152) 138 : cluster [DBG] pgmap v108: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T14:49:08.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:07 vm03 bash[23394]: cluster 2026-03-10T14:49:06.711897+0000 mgr.y (mgr.14152) 138 : cluster [DBG] pgmap v108: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T14:49:09.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:09 vm03 bash[23394]: audit 2026-03-10T14:49:08.723042+0000 mon.a (mon.0) 426 : audit [INF] from='osd.3 v2:192.168.123.100:6813/1912373457' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-10T14:49:09.375 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:09 vm03 bash[23394]: audit 2026-03-10T14:49:08.723042+0000 mon.a (mon.0) 426 : audit [INF] from='osd.3 v2:192.168.123.100:6813/1912373457' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-10T14:49:09.469 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:09 vm00 bash[20726]: audit 2026-03-10T14:49:08.723042+0000 mon.a (mon.0) 426 : audit [INF] from='osd.3 v2:192.168.123.100:6813/1912373457' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-10T14:49:09.469 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:09 vm00 bash[20726]: audit 2026-03-10T14:49:08.723042+0000 mon.a (mon.0) 426 : audit [INF] from='osd.3 v2:192.168.123.100:6813/1912373457' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-10T14:49:09.469 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:09 vm00 bash[28403]: audit 2026-03-10T14:49:08.723042+0000 mon.a (mon.0) 426 : audit [INF] from='osd.3 v2:192.168.123.100:6813/1912373457' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-10T14:49:09.469 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:09 vm00 bash[28403]: audit 2026-03-10T14:49:08.723042+0000 mon.a (mon.0) 426 : audit [INF] from='osd.3 v2:192.168.123.100:6813/1912373457' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-10T14:49:10.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:10 vm03 bash[23394]: cluster 2026-03-10T14:49:08.712202+0000 mgr.y (mgr.14152) 139 : cluster [DBG] pgmap v109: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T14:49:10.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:10 vm03 bash[23394]: cluster 2026-03-10T14:49:08.712202+0000 mgr.y (mgr.14152) 139 : cluster [DBG] 
pgmap v109: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T14:49:10.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:10 vm03 bash[23394]: audit 2026-03-10T14:49:09.091506+0000 mon.a (mon.0) 427 : audit [INF] from='osd.3 v2:192.168.123.100:6813/1912373457' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-10T14:49:10.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:10 vm03 bash[23394]: audit 2026-03-10T14:49:09.091506+0000 mon.a (mon.0) 427 : audit [INF] from='osd.3 v2:192.168.123.100:6813/1912373457' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-10T14:49:10.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:10 vm03 bash[23394]: cluster 2026-03-10T14:49:09.095605+0000 mon.a (mon.0) 428 : cluster [DBG] osdmap e24: 4 total, 3 up, 4 in 2026-03-10T14:49:10.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:10 vm03 bash[23394]: cluster 2026-03-10T14:49:09.095605+0000 mon.a (mon.0) 428 : cluster [DBG] osdmap e24: 4 total, 3 up, 4 in 2026-03-10T14:49:10.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:10 vm03 bash[23394]: audit 2026-03-10T14:49:09.095869+0000 mon.a (mon.0) 429 : audit [INF] from='osd.3 v2:192.168.123.100:6813/1912373457' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-10T14:49:10.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:10 vm03 bash[23394]: audit 2026-03-10T14:49:09.095869+0000 mon.a (mon.0) 429 : audit [INF] from='osd.3 v2:192.168.123.100:6813/1912373457' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-10T14:49:10.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:10 vm03 bash[23394]: audit 2026-03-10T14:49:09.095962+0000 mon.a (mon.0) 430 : audit [DBG] 
from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T14:49:10.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:10 vm03 bash[23394]: audit 2026-03-10T14:49:09.095962+0000 mon.a (mon.0) 430 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T14:49:10.469 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:10 vm00 bash[28403]: cluster 2026-03-10T14:49:08.712202+0000 mgr.y (mgr.14152) 139 : cluster [DBG] pgmap v109: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T14:49:10.469 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:10 vm00 bash[28403]: cluster 2026-03-10T14:49:08.712202+0000 mgr.y (mgr.14152) 139 : cluster [DBG] pgmap v109: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T14:49:10.469 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:10 vm00 bash[28403]: audit 2026-03-10T14:49:09.091506+0000 mon.a (mon.0) 427 : audit [INF] from='osd.3 v2:192.168.123.100:6813/1912373457' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-10T14:49:10.469 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:10 vm00 bash[28403]: audit 2026-03-10T14:49:09.091506+0000 mon.a (mon.0) 427 : audit [INF] from='osd.3 v2:192.168.123.100:6813/1912373457' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-10T14:49:10.469 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:10 vm00 bash[28403]: cluster 2026-03-10T14:49:09.095605+0000 mon.a (mon.0) 428 : cluster [DBG] osdmap e24: 4 total, 3 up, 4 in 2026-03-10T14:49:10.469 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:10 vm00 bash[28403]: cluster 2026-03-10T14:49:09.095605+0000 mon.a (mon.0) 428 : cluster [DBG] osdmap e24: 4 total, 3 up, 4 in 2026-03-10T14:49:10.469 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:10 vm00 bash[28403]: audit 2026-03-10T14:49:09.095869+0000 mon.a (mon.0) 429 : audit [INF] from='osd.3 v2:192.168.123.100:6813/1912373457' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-10T14:49:10.469 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:10 vm00 bash[28403]: audit 2026-03-10T14:49:09.095869+0000 mon.a (mon.0) 429 : audit [INF] from='osd.3 v2:192.168.123.100:6813/1912373457' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-10T14:49:10.469 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:10 vm00 bash[28403]: audit 2026-03-10T14:49:09.095962+0000 mon.a (mon.0) 430 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T14:49:10.469 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:10 vm00 bash[28403]: audit 2026-03-10T14:49:09.095962+0000 mon.a (mon.0) 430 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T14:49:10.469 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:10 vm00 bash[20726]: cluster 2026-03-10T14:49:08.712202+0000 mgr.y (mgr.14152) 139 : cluster [DBG] pgmap v109: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T14:49:10.469 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:10 vm00 bash[20726]: cluster 2026-03-10T14:49:08.712202+0000 mgr.y (mgr.14152) 139 : cluster [DBG] pgmap v109: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T14:49:10.469 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:10 vm00 bash[20726]: audit 2026-03-10T14:49:09.091506+0000 mon.a (mon.0) 427 : audit [INF] from='osd.3 v2:192.168.123.100:6813/1912373457' entity='osd.3' 
cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-10T14:49:10.469 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:10 vm00 bash[20726]: audit 2026-03-10T14:49:09.091506+0000 mon.a (mon.0) 427 : audit [INF] from='osd.3 v2:192.168.123.100:6813/1912373457' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-10T14:49:10.469 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:10 vm00 bash[20726]: cluster 2026-03-10T14:49:09.095605+0000 mon.a (mon.0) 428 : cluster [DBG] osdmap e24: 4 total, 3 up, 4 in 2026-03-10T14:49:10.469 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:10 vm00 bash[20726]: cluster 2026-03-10T14:49:09.095605+0000 mon.a (mon.0) 428 : cluster [DBG] osdmap e24: 4 total, 3 up, 4 in 2026-03-10T14:49:10.469 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:10 vm00 bash[20726]: audit 2026-03-10T14:49:09.095869+0000 mon.a (mon.0) 429 : audit [INF] from='osd.3 v2:192.168.123.100:6813/1912373457' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-10T14:49:10.469 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:10 vm00 bash[20726]: audit 2026-03-10T14:49:09.095869+0000 mon.a (mon.0) 429 : audit [INF] from='osd.3 v2:192.168.123.100:6813/1912373457' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-10T14:49:10.469 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:10 vm00 bash[20726]: audit 2026-03-10T14:49:09.095962+0000 mon.a (mon.0) 430 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T14:49:10.469 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:10 vm00 bash[20726]: audit 2026-03-10T14:49:09.095962+0000 mon.a (mon.0) 430 : audit [DBG] from='mgr.14152 
192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T14:49:11.468 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:11 vm00 bash[28403]: audit 2026-03-10T14:49:10.094850+0000 mon.a (mon.0) 431 : audit [INF] from='osd.3 v2:192.168.123.100:6813/1912373457' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished 2026-03-10T14:49:11.469 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:11 vm00 bash[28403]: audit 2026-03-10T14:49:10.094850+0000 mon.a (mon.0) 431 : audit [INF] from='osd.3 v2:192.168.123.100:6813/1912373457' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished 2026-03-10T14:49:11.469 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:11 vm00 bash[28403]: cluster 2026-03-10T14:49:10.098047+0000 mon.a (mon.0) 432 : cluster [DBG] osdmap e25: 4 total, 3 up, 4 in 2026-03-10T14:49:11.469 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:11 vm00 bash[28403]: cluster 2026-03-10T14:49:10.098047+0000 mon.a (mon.0) 432 : cluster [DBG] osdmap e25: 4 total, 3 up, 4 in 2026-03-10T14:49:11.469 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:11 vm00 bash[28403]: audit 2026-03-10T14:49:10.099565+0000 mon.a (mon.0) 433 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T14:49:11.469 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:11 vm00 bash[28403]: audit 2026-03-10T14:49:10.099565+0000 mon.a (mon.0) 433 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T14:49:11.469 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:11 vm00 bash[28403]: audit 2026-03-10T14:49:10.103146+0000 mon.a (mon.0) 434 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 
cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T14:49:11.469 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:11 vm00 bash[28403]: audit 2026-03-10T14:49:10.103146+0000 mon.a (mon.0) 434 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T14:49:11.469 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:11 vm00 bash[28403]: audit 2026-03-10T14:49:11.102493+0000 mon.a (mon.0) 435 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T14:49:11.469 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:11 vm00 bash[28403]: audit 2026-03-10T14:49:11.102493+0000 mon.a (mon.0) 435 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T14:49:11.469 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:11 vm00 bash[20726]: audit 2026-03-10T14:49:10.094850+0000 mon.a (mon.0) 431 : audit [INF] from='osd.3 v2:192.168.123.100:6813/1912373457' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished 2026-03-10T14:49:11.469 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:11 vm00 bash[20726]: audit 2026-03-10T14:49:10.094850+0000 mon.a (mon.0) 431 : audit [INF] from='osd.3 v2:192.168.123.100:6813/1912373457' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished 2026-03-10T14:49:11.469 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:11 vm00 bash[20726]: cluster 2026-03-10T14:49:10.098047+0000 mon.a (mon.0) 432 : cluster [DBG] osdmap e25: 4 total, 3 up, 4 in 2026-03-10T14:49:11.469 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:11 vm00 bash[20726]: cluster 2026-03-10T14:49:10.098047+0000 mon.a (mon.0) 432 : cluster [DBG] osdmap e25: 4 total, 3 up, 4 
in 2026-03-10T14:49:11.469 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:11 vm00 bash[20726]: audit 2026-03-10T14:49:10.099565+0000 mon.a (mon.0) 433 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T14:49:11.469 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:11 vm00 bash[20726]: audit 2026-03-10T14:49:10.099565+0000 mon.a (mon.0) 433 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T14:49:11.469 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:11 vm00 bash[20726]: audit 2026-03-10T14:49:10.103146+0000 mon.a (mon.0) 434 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T14:49:11.469 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:11 vm00 bash[20726]: audit 2026-03-10T14:49:10.103146+0000 mon.a (mon.0) 434 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T14:49:11.469 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:11 vm00 bash[20726]: audit 2026-03-10T14:49:11.102493+0000 mon.a (mon.0) 435 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T14:49:11.469 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:11 vm00 bash[20726]: audit 2026-03-10T14:49:11.102493+0000 mon.a (mon.0) 435 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T14:49:11.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:11 vm03 bash[23394]: audit 2026-03-10T14:49:10.094850+0000 mon.a (mon.0) 431 : audit [INF] from='osd.3 v2:192.168.123.100:6813/1912373457' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": 
["host=vm00", "root=default"]}]': finished 2026-03-10T14:49:11.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:11 vm03 bash[23394]: audit 2026-03-10T14:49:10.094850+0000 mon.a (mon.0) 431 : audit [INF] from='osd.3 v2:192.168.123.100:6813/1912373457' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished 2026-03-10T14:49:11.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:11 vm03 bash[23394]: cluster 2026-03-10T14:49:10.098047+0000 mon.a (mon.0) 432 : cluster [DBG] osdmap e25: 4 total, 3 up, 4 in 2026-03-10T14:49:11.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:11 vm03 bash[23394]: cluster 2026-03-10T14:49:10.098047+0000 mon.a (mon.0) 432 : cluster [DBG] osdmap e25: 4 total, 3 up, 4 in 2026-03-10T14:49:11.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:11 vm03 bash[23394]: audit 2026-03-10T14:49:10.099565+0000 mon.a (mon.0) 433 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T14:49:11.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:11 vm03 bash[23394]: audit 2026-03-10T14:49:10.099565+0000 mon.a (mon.0) 433 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T14:49:11.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:11 vm03 bash[23394]: audit 2026-03-10T14:49:10.103146+0000 mon.a (mon.0) 434 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T14:49:11.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:11 vm03 bash[23394]: audit 2026-03-10T14:49:10.103146+0000 mon.a (mon.0) 434 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T14:49:11.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:11 
vm03 bash[23394]: audit 2026-03-10T14:49:11.102493+0000 mon.a (mon.0) 435 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T14:49:11.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:11 vm03 bash[23394]: audit 2026-03-10T14:49:11.102493+0000 mon.a (mon.0) 435 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T14:49:12.397 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:12 vm00 bash[20726]: cluster 2026-03-10T14:49:09.732821+0000 osd.3 (osd.3) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T14:49:12.397 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:12 vm00 bash[20726]: cluster 2026-03-10T14:49:09.732821+0000 osd.3 (osd.3) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T14:49:12.397 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:12 vm00 bash[20726]: cluster 2026-03-10T14:49:09.732876+0000 osd.3 (osd.3) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T14:49:12.397 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:12 vm00 bash[20726]: cluster 2026-03-10T14:49:09.732876+0000 osd.3 (osd.3) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T14:49:12.397 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:12 vm00 bash[20726]: cluster 2026-03-10T14:49:10.712473+0000 mgr.y (mgr.14152) 140 : cluster [DBG] pgmap v112: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T14:49:12.398 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:12 vm00 bash[20726]: cluster 2026-03-10T14:49:10.712473+0000 mgr.y (mgr.14152) 140 : cluster [DBG] pgmap v112: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T14:49:12.398 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:12 vm00 bash[20726]: cluster 2026-03-10T14:49:11.115776+0000 mon.a (mon.0) 436 : cluster [INF] osd.3 v2:192.168.123.100:6813/1912373457 boot 
2026-03-10T14:49:12.398 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:12 vm00 bash[20726]: cluster 2026-03-10T14:49:11.115776+0000 mon.a (mon.0) 436 : cluster [INF] osd.3 v2:192.168.123.100:6813/1912373457 boot 2026-03-10T14:49:12.398 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:12 vm00 bash[20726]: cluster 2026-03-10T14:49:11.115814+0000 mon.a (mon.0) 437 : cluster [DBG] osdmap e26: 4 total, 4 up, 4 in 2026-03-10T14:49:12.398 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:12 vm00 bash[20726]: cluster 2026-03-10T14:49:11.115814+0000 mon.a (mon.0) 437 : cluster [DBG] osdmap e26: 4 total, 4 up, 4 in 2026-03-10T14:49:12.398 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:12 vm00 bash[20726]: audit 2026-03-10T14:49:11.116716+0000 mon.a (mon.0) 438 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T14:49:12.398 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:12 vm00 bash[20726]: audit 2026-03-10T14:49:11.116716+0000 mon.a (mon.0) 438 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T14:49:12.398 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:12 vm00 bash[20726]: audit 2026-03-10T14:49:11.489891+0000 mon.a (mon.0) 439 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:49:12.398 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:12 vm00 bash[20726]: audit 2026-03-10T14:49:11.489891+0000 mon.a (mon.0) 439 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:49:12.398 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:12 vm00 bash[20726]: audit 2026-03-10T14:49:11.495834+0000 mon.a (mon.0) 440 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:49:12.398 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:12 vm00 bash[20726]: audit 
2026-03-10T14:49:11.495834+0000 mon.a (mon.0) 440 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:49:12.398 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:12 vm00 bash[20726]: audit 2026-03-10T14:49:11.497391+0000 mon.a (mon.0) 441 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:49:12.398 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:12 vm00 bash[20726]: audit 2026-03-10T14:49:11.497391+0000 mon.a (mon.0) 441 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:49:12.398 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:12 vm00 bash[20726]: audit 2026-03-10T14:49:11.498111+0000 mon.a (mon.0) 442 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:49:12.398 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:12 vm00 bash[20726]: audit 2026-03-10T14:49:11.498111+0000 mon.a (mon.0) 442 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:49:12.398 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:12 vm00 bash[20726]: audit 2026-03-10T14:49:11.503407+0000 mon.a (mon.0) 443 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:49:12.398 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:12 vm00 bash[20726]: audit 2026-03-10T14:49:11.503407+0000 mon.a (mon.0) 443 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:49:12.398 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:12 vm00 bash[28403]: cluster 2026-03-10T14:49:09.732821+0000 osd.3 (osd.3) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T14:49:12.398 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:12 vm00 bash[28403]: cluster 2026-03-10T14:49:09.732821+0000 osd.3 (osd.3) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T14:49:12.398 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:12 vm00 bash[28403]: cluster 2026-03-10T14:49:09.732876+0000 osd.3 (osd.3) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T14:49:12.398 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:12 vm00 bash[28403]: cluster 2026-03-10T14:49:10.712473+0000 mgr.y (mgr.14152) 140 : cluster [DBG] pgmap v112: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T14:49:12.398 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:12 vm00 bash[28403]: cluster 2026-03-10T14:49:11.115776+0000 mon.a (mon.0) 436 : cluster [INF] osd.3 v2:192.168.123.100:6813/1912373457 boot
2026-03-10T14:49:12.398 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:12 vm00 bash[28403]: cluster 2026-03-10T14:49:11.115814+0000 mon.a (mon.0) 437 : cluster [DBG] osdmap e26: 4 total, 4 up, 4 in
2026-03-10T14:49:12.398 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:12 vm00 bash[28403]: audit 2026-03-10T14:49:11.116716+0000 mon.a (mon.0) 438 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T14:49:12.398 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:12 vm00 bash[28403]: audit 2026-03-10T14:49:11.489891+0000 mon.a (mon.0) 439 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:49:12.398 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:12 vm00 bash[28403]: audit 2026-03-10T14:49:11.495834+0000 mon.a (mon.0) 440 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:49:12.398 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:12 vm00 bash[28403]: audit 2026-03-10T14:49:11.497391+0000 mon.a (mon.0) 441 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:49:12.398 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:12 vm00 bash[28403]: audit 2026-03-10T14:49:11.498111+0000 mon.a (mon.0) 442 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T14:49:12.398 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:12 vm00 bash[28403]: audit 2026-03-10T14:49:11.503407+0000 mon.a (mon.0) 443 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:49:12.452 INFO:teuthology.orchestra.run.vm00.stdout:Created osd(s) 3 on host 'vm00'
2026-03-10T14:49:12.536 DEBUG:teuthology.orchestra.run.vm00:osd.3> sudo journalctl -f -n 0 -u ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@osd.3.service
2026-03-10T14:49:12.537 INFO:tasks.cephadm:Deploying osd.4 on vm03 with /dev/vde...
2026-03-10T14:49:12.537 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 -- lvm zap /dev/vde
2026-03-10T14:49:12.543 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:12 vm03 bash[23394]: cluster 2026-03-10T14:49:09.732821+0000 osd.3 (osd.3) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T14:49:12.543 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:12 vm03 bash[23394]: cluster 2026-03-10T14:49:09.732876+0000 osd.3 (osd.3) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T14:49:12.543 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:12 vm03 bash[23394]: cluster 2026-03-10T14:49:10.712473+0000 mgr.y (mgr.14152) 140 : cluster [DBG] pgmap v112: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T14:49:12.543 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:12 vm03 bash[23394]: cluster 2026-03-10T14:49:11.115776+0000 mon.a (mon.0) 436 : cluster [INF] osd.3 v2:192.168.123.100:6813/1912373457 boot
2026-03-10T14:49:12.543 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:12 vm03 bash[23394]: cluster 2026-03-10T14:49:11.115814+0000 mon.a (mon.0) 437 : cluster [DBG] osdmap e26: 4 total, 4 up, 4 in
2026-03-10T14:49:12.543 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:12 vm03 bash[23394]: audit 2026-03-10T14:49:11.116716+0000 mon.a (mon.0) 438 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T14:49:12.543 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:12 vm03 bash[23394]: audit 2026-03-10T14:49:11.489891+0000 mon.a (mon.0) 439 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:49:12.543 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:12 vm03 bash[23394]: audit 2026-03-10T14:49:11.495834+0000 mon.a (mon.0) 440 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:49:12.543 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:12 vm03 bash[23394]: audit 2026-03-10T14:49:11.497391+0000 mon.a (mon.0) 441 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:49:12.543 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:12 vm03 bash[23394]: audit 2026-03-10T14:49:11.498111+0000 mon.a (mon.0) 442 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T14:49:12.543 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:12 vm03 bash[23394]: audit 2026-03-10T14:49:11.503407+0000 mon.a (mon.0) 443 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:49:13.468 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:13 vm00 bash[28403]: audit 2026-03-10T14:49:12.432517+0000 mon.a (mon.0) 444 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:49:13.469 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:13 vm00 bash[28403]: audit 2026-03-10T14:49:12.438598+0000 mon.a (mon.0) 445 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:49:13.469 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:13 vm00 bash[28403]: audit 2026-03-10T14:49:12.445970+0000 mon.a (mon.0) 446 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:49:13.469 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:13 vm00 bash[28403]: cluster 2026-03-10T14:49:12.510540+0000 mon.a (mon.0) 447 : cluster [DBG] osdmap e27: 4 total, 4 up, 4 in
2026-03-10T14:49:13.469 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:13 vm00 bash[20726]: audit 2026-03-10T14:49:12.432517+0000 mon.a (mon.0) 444 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:49:13.469 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:13 vm00 bash[20726]: audit 2026-03-10T14:49:12.438598+0000 mon.a (mon.0) 445 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:49:13.469 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:13 vm00 bash[20726]: audit 2026-03-10T14:49:12.445970+0000 mon.a (mon.0) 446 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:49:13.469 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:13 vm00 bash[20726]: cluster 2026-03-10T14:49:12.510540+0000 mon.a (mon.0) 447 : cluster [DBG] osdmap e27: 4 total, 4 up, 4 in
2026-03-10T14:49:13.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:13 vm03 bash[23394]: audit 2026-03-10T14:49:12.432517+0000 mon.a (mon.0) 444 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:49:13.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:13 vm03 bash[23394]: audit 2026-03-10T14:49:12.438598+0000 mon.a (mon.0) 445 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:49:13.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:13 vm03 bash[23394]: audit 2026-03-10T14:49:12.445970+0000 mon.a (mon.0) 446 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:49:13.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:13 vm03 bash[23394]: cluster 2026-03-10T14:49:12.510540+0000 mon.a (mon.0) 447 : cluster [DBG] osdmap e27: 4 total, 4 up, 4 in
2026-03-10T14:49:14.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:14 vm00 bash[20726]: cluster 2026-03-10T14:49:12.712763+0000 mgr.y (mgr.14152) 141 : cluster [DBG] pgmap v115: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T14:49:14.469 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:14 vm00 bash[28403]: cluster 2026-03-10T14:49:12.712763+0000 mgr.y (mgr.14152) 141 : cluster [DBG] pgmap v115: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T14:49:14.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:14 vm03 bash[23394]: cluster 2026-03-10T14:49:12.712763+0000 mgr.y (mgr.14152) 141 : cluster [DBG] pgmap v115: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T14:49:16.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:16 vm00 bash[20726]: cluster 2026-03-10T14:49:14.713073+0000 mgr.y (mgr.14152) 142 : cluster [DBG] pgmap v116: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T14:49:16.469 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:16 vm00 bash[28403]: cluster 2026-03-10T14:49:14.713073+0000 mgr.y (mgr.14152) 142 : cluster [DBG] pgmap v116: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T14:49:16.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:16 vm03 bash[23394]: cluster 2026-03-10T14:49:14.713073+0000 mgr.y (mgr.14152) 142 : cluster [DBG] pgmap v116: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T14:49:17.151 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/mon.b/config
2026-03-10T14:49:18.322 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T14:49:18.350 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 -- ceph orch daemon add osd vm03:/dev/vde
2026-03-10T14:49:18.468 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:18 vm00 bash[28403]: cluster 2026-03-10T14:49:16.713347+0000 mgr.y (mgr.14152) 143 : cluster [DBG] pgmap v117: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T14:49:18.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:18 vm00 bash[20726]: cluster 2026-03-10T14:49:16.713347+0000 mgr.y (mgr.14152) 143 : cluster [DBG] pgmap v117: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T14:49:18.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:18 vm03 bash[23394]: cluster 2026-03-10T14:49:16.713347+0000 mgr.y (mgr.14152) 143 : cluster [DBG] pgmap v117: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T14:49:19.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:19 vm03 bash[23394]: cephadm 2026-03-10T14:49:18.304281+0000 mgr.y (mgr.14152) 144 : cephadm [INF] Detected new or changed devices on vm00
2026-03-10T14:49:19.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:19 vm03 bash[23394]: audit 2026-03-10T14:49:18.316981+0000 mon.a (mon.0) 448 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:49:19.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:19 vm03 bash[23394]: audit 2026-03-10T14:49:18.328520+0000 mon.a (mon.0) 449 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:49:19.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:19 vm03 bash[23394]: audit 2026-03-10T14:49:18.330729+0000 mon.a (mon.0) 450 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch
2026-03-10T14:49:19.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:19 vm03 bash[23394]: audit 2026-03-10T14:49:18.331602+0000 mon.a (mon.0) 451 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:49:19.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:19 vm03 bash[23394]: audit 2026-03-10T14:49:18.332221+0000 mon.a (mon.0) 452 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T14:49:19.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:19 vm03 bash[23394]: audit 2026-03-10T14:49:18.336444+0000 mon.a (mon.0) 453 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:49:19.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:19 vm03 bash[23394]: cluster 2026-03-10T14:49:18.713627+0000 mgr.y (mgr.14152) 145 : cluster [DBG] pgmap v118: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T14:49:19.718 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:19 vm00 bash[28403]: cephadm 2026-03-10T14:49:18.304281+0000 mgr.y (mgr.14152) 144 : cephadm [INF] Detected new or changed devices on vm00
2026-03-10T14:49:19.718 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:19 vm00 bash[28403]: audit 2026-03-10T14:49:18.316981+0000 mon.a (mon.0) 448 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:49:19.719 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:19 vm00 bash[28403]: audit 2026-03-10T14:49:18.328520+0000 mon.a (mon.0) 449 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:49:19.719 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:19 vm00 bash[28403]: audit 2026-03-10T14:49:18.330729+0000 mon.a (mon.0) 450 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch
2026-03-10T14:49:19.719 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:19 vm00 bash[28403]: audit 2026-03-10T14:49:18.331602+0000 mon.a (mon.0) 451 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:49:19.719 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:19 vm00 bash[28403]: audit 2026-03-10T14:49:18.332221+0000 mon.a (mon.0) 452 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T14:49:19.719 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:19 vm00 bash[28403]: audit 2026-03-10T14:49:18.336444+0000 mon.a (mon.0) 453 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:49:19.719 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:19 vm00 bash[28403]: cluster 2026-03-10T14:49:18.713627+0000 mgr.y (mgr.14152) 145 : cluster [DBG] pgmap v118: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T14:49:19.719 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:19 vm00 bash[20726]: cephadm 2026-03-10T14:49:18.304281+0000 mgr.y (mgr.14152) 144 : cephadm [INF] Detected new or changed devices on vm00
2026-03-10T14:49:19.719 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:19 vm00 bash[20726]: audit 2026-03-10T14:49:18.316981+0000 mon.a (mon.0) 448 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:49:19.719 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:19 vm00 bash[20726]: audit 2026-03-10T14:49:18.328520+0000 mon.a (mon.0) 449 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:49:19.719 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:19 vm00 bash[20726]: audit 2026-03-10T14:49:18.330729+0000 mon.a (mon.0) 450 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch
2026-03-10T14:49:19.719 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:19 vm00 bash[20726]: audit 2026-03-10T14:49:18.331602+0000 mon.a (mon.0) 451 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:49:19.720 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:19 vm00 bash[20726]: audit 2026-03-10T14:49:18.332221+0000 mon.a (mon.0) 452 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T14:49:19.720 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:19 vm00 bash[20726]: audit 2026-03-10T14:49:18.336444+0000 mon.a (mon.0) 453 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:49:19.720 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:19 vm00 bash[20726]: cluster 2026-03-10T14:49:18.713627+0000 mgr.y (mgr.14152) 145 : cluster [DBG] pgmap v118: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T14:49:22.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:21 vm03 bash[23394]: cluster 2026-03-10T14:49:20.713932+0000 mgr.y (mgr.14152) 146 : cluster [DBG] pgmap v119: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T14:49:22.218 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:21 vm00 bash[28403]: cluster 2026-03-10T14:49:20.713932+0000 mgr.y (mgr.14152) 146 : cluster [DBG] pgmap v119: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T14:49:22.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:21 vm00 bash[20726]: cluster 2026-03-10T14:49:20.713932+0000 mgr.y (mgr.14152) 146 : cluster [DBG] pgmap v119: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T14:49:23.003 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/mon.b/config
2026-03-10T14:49:24.218 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:23 vm00 bash[28403]: cluster 2026-03-10T14:49:22.714247+0000 mgr.y (mgr.14152) 147 : cluster [DBG] pgmap v120: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T14:49:24.218 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:23 vm00 bash[28403]: audit 2026-03-10T14:49:23.294132+0000 mon.a (mon.0) 454 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T14:49:24.218 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:23 vm00 bash[28403]: audit 2026-03-10T14:49:23.296054+0000 mon.a (mon.0) 455 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T14:49:24.218 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:23 vm00 bash[28403]: audit 2026-03-10T14:49:23.296531+0000 mon.a (mon.0) 456 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:49:24.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:23 vm00 bash[20726]: cluster 2026-03-10T14:49:22.714247+0000 mgr.y (mgr.14152) 147 : cluster [DBG] pgmap v120: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T14:49:24.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:23 vm00 bash[20726]: audit 2026-03-10T14:49:23.294132+0000 mon.a (mon.0) 454 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T14:49:24.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:23 vm00 bash[20726]: audit 2026-03-10T14:49:23.296054+0000 mon.a (mon.0) 455 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T14:49:24.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:23 vm00 bash[20726]: audit 2026-03-10T14:49:23.296531+0000 mon.a (mon.0) 456 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:49:24.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:23 vm03 bash[23394]: cluster 2026-03-10T14:49:22.714247+0000 mgr.y (mgr.14152) 147 : cluster [DBG] pgmap v120: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T14:49:24.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:23 vm03 bash[23394]: audit 2026-03-10T14:49:23.294132+0000 mon.a (mon.0) 454 : audit
[DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T14:49:24.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:23 vm03 bash[23394]: audit 2026-03-10T14:49:23.294132+0000 mon.a (mon.0) 454 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T14:49:24.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:23 vm03 bash[23394]: audit 2026-03-10T14:49:23.296054+0000 mon.a (mon.0) 455 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T14:49:24.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:23 vm03 bash[23394]: audit 2026-03-10T14:49:23.296054+0000 mon.a (mon.0) 455 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T14:49:24.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:23 vm03 bash[23394]: audit 2026-03-10T14:49:23.296531+0000 mon.a (mon.0) 456 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:49:24.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:23 vm03 bash[23394]: audit 2026-03-10T14:49:23.296531+0000 mon.a (mon.0) 456 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:49:25.218 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:24 vm00 bash[28403]: audit 2026-03-10T14:49:23.292558+0000 mgr.y (mgr.14152) 148 : audit [DBG] from='client.24178 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:49:25.218 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:24 vm00 bash[28403]: audit 2026-03-10T14:49:23.292558+0000 mgr.y (mgr.14152) 148 : audit [DBG] from='client.24178 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:49:25.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:24 vm00 bash[20726]: audit 2026-03-10T14:49:23.292558+0000 mgr.y (mgr.14152) 148 : audit [DBG] from='client.24178 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:49:25.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:24 vm00 bash[20726]: audit 2026-03-10T14:49:23.292558+0000 mgr.y (mgr.14152) 148 : audit [DBG] from='client.24178 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:49:25.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:24 vm03 bash[23394]: audit 2026-03-10T14:49:23.292558+0000 mgr.y (mgr.14152) 148 : audit [DBG] from='client.24178 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:49:25.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:24 vm03 bash[23394]: audit 2026-03-10T14:49:23.292558+0000 mgr.y (mgr.14152) 148 : audit [DBG] from='client.24178 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:49:26.218 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:25 vm00 bash[28403]: cluster 2026-03-10T14:49:24.714595+0000 mgr.y (mgr.14152) 149 : cluster [DBG] pgmap v121: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T14:49:26.218 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:25 vm00 bash[28403]: cluster 2026-03-10T14:49:24.714595+0000 mgr.y (mgr.14152) 149 : 
cluster [DBG] pgmap v121: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T14:49:26.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:25 vm00 bash[20726]: cluster 2026-03-10T14:49:24.714595+0000 mgr.y (mgr.14152) 149 : cluster [DBG] pgmap v121: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T14:49:26.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:25 vm00 bash[20726]: cluster 2026-03-10T14:49:24.714595+0000 mgr.y (mgr.14152) 149 : cluster [DBG] pgmap v121: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T14:49:26.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:25 vm03 bash[23394]: cluster 2026-03-10T14:49:24.714595+0000 mgr.y (mgr.14152) 149 : cluster [DBG] pgmap v121: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T14:49:26.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:25 vm03 bash[23394]: cluster 2026-03-10T14:49:24.714595+0000 mgr.y (mgr.14152) 149 : cluster [DBG] pgmap v121: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T14:49:28.468 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:28 vm00 bash[28403]: cluster 2026-03-10T14:49:26.714922+0000 mgr.y (mgr.14152) 150 : cluster [DBG] pgmap v122: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T14:49:28.468 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:28 vm00 bash[28403]: cluster 2026-03-10T14:49:26.714922+0000 mgr.y (mgr.14152) 150 : cluster [DBG] pgmap v122: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T14:49:28.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:28 vm00 bash[20726]: cluster 2026-03-10T14:49:26.714922+0000 mgr.y (mgr.14152) 150 : cluster [DBG] pgmap v122: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T14:49:28.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:28 
vm00 bash[20726]: cluster 2026-03-10T14:49:26.714922+0000 mgr.y (mgr.14152) 150 : cluster [DBG] pgmap v122: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T14:49:28.543 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:28 vm03 bash[23394]: cluster 2026-03-10T14:49:26.714922+0000 mgr.y (mgr.14152) 150 : cluster [DBG] pgmap v122: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T14:49:28.543 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:28 vm03 bash[23394]: cluster 2026-03-10T14:49:26.714922+0000 mgr.y (mgr.14152) 150 : cluster [DBG] pgmap v122: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T14:49:29.468 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:29 vm00 bash[28403]: audit 2026-03-10T14:49:29.053868+0000 mon.b (mon.1) 11 : audit [INF] from='client.? 192.168.123.103:0/3211868882' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d4924339-f850-475e-9859-ad7c6a3d2123"}]: dispatch 2026-03-10T14:49:29.468 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:29 vm00 bash[28403]: audit 2026-03-10T14:49:29.053868+0000 mon.b (mon.1) 11 : audit [INF] from='client.? 192.168.123.103:0/3211868882' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d4924339-f850-475e-9859-ad7c6a3d2123"}]: dispatch 2026-03-10T14:49:29.468 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:29 vm00 bash[28403]: audit 2026-03-10T14:49:29.054500+0000 mon.a (mon.0) 457 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d4924339-f850-475e-9859-ad7c6a3d2123"}]: dispatch 2026-03-10T14:49:29.468 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:29 vm00 bash[28403]: audit 2026-03-10T14:49:29.054500+0000 mon.a (mon.0) 457 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d4924339-f850-475e-9859-ad7c6a3d2123"}]: dispatch 2026-03-10T14:49:29.468 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:29 vm00 bash[28403]: audit 2026-03-10T14:49:29.058492+0000 mon.a (mon.0) 458 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d4924339-f850-475e-9859-ad7c6a3d2123"}]': finished 2026-03-10T14:49:29.468 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:29 vm00 bash[28403]: audit 2026-03-10T14:49:29.058492+0000 mon.a (mon.0) 458 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d4924339-f850-475e-9859-ad7c6a3d2123"}]': finished 2026-03-10T14:49:29.468 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:29 vm00 bash[28403]: cluster 2026-03-10T14:49:29.061667+0000 mon.a (mon.0) 459 : cluster [DBG] osdmap e28: 5 total, 4 up, 5 in 2026-03-10T14:49:29.468 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:29 vm00 bash[28403]: cluster 2026-03-10T14:49:29.061667+0000 mon.a (mon.0) 459 : cluster [DBG] osdmap e28: 5 total, 4 up, 5 in 2026-03-10T14:49:29.468 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:29 vm00 bash[28403]: audit 2026-03-10T14:49:29.061808+0000 mon.a (mon.0) 460 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T14:49:29.468 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:29 vm00 bash[28403]: audit 2026-03-10T14:49:29.061808+0000 mon.a (mon.0) 460 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T14:49:29.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:29 vm00 bash[20726]: audit 2026-03-10T14:49:29.053868+0000 mon.b (mon.1) 11 : audit [INF] from='client.? 
192.168.123.103:0/3211868882' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d4924339-f850-475e-9859-ad7c6a3d2123"}]: dispatch 2026-03-10T14:49:29.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:29 vm00 bash[20726]: audit 2026-03-10T14:49:29.053868+0000 mon.b (mon.1) 11 : audit [INF] from='client.? 192.168.123.103:0/3211868882' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d4924339-f850-475e-9859-ad7c6a3d2123"}]: dispatch 2026-03-10T14:49:29.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:29 vm00 bash[20726]: audit 2026-03-10T14:49:29.054500+0000 mon.a (mon.0) 457 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d4924339-f850-475e-9859-ad7c6a3d2123"}]: dispatch 2026-03-10T14:49:29.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:29 vm00 bash[20726]: audit 2026-03-10T14:49:29.054500+0000 mon.a (mon.0) 457 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d4924339-f850-475e-9859-ad7c6a3d2123"}]: dispatch 2026-03-10T14:49:29.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:29 vm00 bash[20726]: audit 2026-03-10T14:49:29.058492+0000 mon.a (mon.0) 458 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d4924339-f850-475e-9859-ad7c6a3d2123"}]': finished 2026-03-10T14:49:29.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:29 vm00 bash[20726]: audit 2026-03-10T14:49:29.058492+0000 mon.a (mon.0) 458 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d4924339-f850-475e-9859-ad7c6a3d2123"}]': finished 2026-03-10T14:49:29.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:29 vm00 bash[20726]: cluster 2026-03-10T14:49:29.061667+0000 mon.a (mon.0) 459 : cluster [DBG] osdmap e28: 5 total, 4 up, 5 in 2026-03-10T14:49:29.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:29 vm00 bash[20726]: cluster 2026-03-10T14:49:29.061667+0000 mon.a (mon.0) 459 : cluster [DBG] osdmap e28: 5 total, 4 up, 5 in 2026-03-10T14:49:29.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:29 vm00 bash[20726]: audit 2026-03-10T14:49:29.061808+0000 mon.a (mon.0) 460 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T14:49:29.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:29 vm00 bash[20726]: audit 2026-03-10T14:49:29.061808+0000 mon.a (mon.0) 460 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T14:49:29.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:29 vm03 bash[23394]: audit 2026-03-10T14:49:29.053868+0000 mon.b (mon.1) 11 : audit [INF] from='client.? 192.168.123.103:0/3211868882' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d4924339-f850-475e-9859-ad7c6a3d2123"}]: dispatch 2026-03-10T14:49:29.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:29 vm03 bash[23394]: audit 2026-03-10T14:49:29.053868+0000 mon.b (mon.1) 11 : audit [INF] from='client.? 192.168.123.103:0/3211868882' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d4924339-f850-475e-9859-ad7c6a3d2123"}]: dispatch 2026-03-10T14:49:29.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:29 vm03 bash[23394]: audit 2026-03-10T14:49:29.054500+0000 mon.a (mon.0) 457 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d4924339-f850-475e-9859-ad7c6a3d2123"}]: dispatch 2026-03-10T14:49:29.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:29 vm03 bash[23394]: audit 2026-03-10T14:49:29.054500+0000 mon.a (mon.0) 457 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d4924339-f850-475e-9859-ad7c6a3d2123"}]: dispatch 2026-03-10T14:49:29.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:29 vm03 bash[23394]: audit 2026-03-10T14:49:29.058492+0000 mon.a (mon.0) 458 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d4924339-f850-475e-9859-ad7c6a3d2123"}]': finished 2026-03-10T14:49:29.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:29 vm03 bash[23394]: audit 2026-03-10T14:49:29.058492+0000 mon.a (mon.0) 458 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d4924339-f850-475e-9859-ad7c6a3d2123"}]': finished 2026-03-10T14:49:29.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:29 vm03 bash[23394]: cluster 2026-03-10T14:49:29.061667+0000 mon.a (mon.0) 459 : cluster [DBG] osdmap e28: 5 total, 4 up, 5 in 2026-03-10T14:49:29.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:29 vm03 bash[23394]: cluster 2026-03-10T14:49:29.061667+0000 mon.a (mon.0) 459 : cluster [DBG] osdmap e28: 5 total, 4 up, 5 in 2026-03-10T14:49:29.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:29 vm03 bash[23394]: audit 2026-03-10T14:49:29.061808+0000 mon.a (mon.0) 460 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T14:49:29.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:29 vm03 bash[23394]: audit 2026-03-10T14:49:29.061808+0000 mon.a (mon.0) 460 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 
2026-03-10T14:49:30.468 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:30 vm00 bash[28403]: cluster 2026-03-10T14:49:28.715209+0000 mgr.y (mgr.14152) 151 : cluster [DBG] pgmap v123: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T14:49:30.468 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:30 vm00 bash[28403]: cluster 2026-03-10T14:49:28.715209+0000 mgr.y (mgr.14152) 151 : cluster [DBG] pgmap v123: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T14:49:30.468 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:30 vm00 bash[28403]: audit 2026-03-10T14:49:29.794650+0000 mon.c (mon.2) 11 : audit [DBG] from='client.? 192.168.123.103:0/3525218876' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T14:49:30.468 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:30 vm00 bash[28403]: audit 2026-03-10T14:49:29.794650+0000 mon.c (mon.2) 11 : audit [DBG] from='client.? 192.168.123.103:0/3525218876' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T14:49:30.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:30 vm00 bash[20726]: cluster 2026-03-10T14:49:28.715209+0000 mgr.y (mgr.14152) 151 : cluster [DBG] pgmap v123: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T14:49:30.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:30 vm00 bash[20726]: cluster 2026-03-10T14:49:28.715209+0000 mgr.y (mgr.14152) 151 : cluster [DBG] pgmap v123: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T14:49:30.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:30 vm00 bash[20726]: audit 2026-03-10T14:49:29.794650+0000 mon.c (mon.2) 11 : audit [DBG] from='client.? 
192.168.123.103:0/3525218876' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T14:49:30.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:30 vm00 bash[20726]: audit 2026-03-10T14:49:29.794650+0000 mon.c (mon.2) 11 : audit [DBG] from='client.? 192.168.123.103:0/3525218876' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T14:49:30.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:30 vm03 bash[23394]: cluster 2026-03-10T14:49:28.715209+0000 mgr.y (mgr.14152) 151 : cluster [DBG] pgmap v123: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T14:49:30.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:30 vm03 bash[23394]: cluster 2026-03-10T14:49:28.715209+0000 mgr.y (mgr.14152) 151 : cluster [DBG] pgmap v123: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T14:49:30.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:30 vm03 bash[23394]: audit 2026-03-10T14:49:29.794650+0000 mon.c (mon.2) 11 : audit [DBG] from='client.? 192.168.123.103:0/3525218876' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T14:49:30.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:30 vm03 bash[23394]: audit 2026-03-10T14:49:29.794650+0000 mon.c (mon.2) 11 : audit [DBG] from='client.? 
192.168.123.103:0/3525218876' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T14:49:32.468 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:32 vm00 bash[28403]: cluster 2026-03-10T14:49:30.715527+0000 mgr.y (mgr.14152) 152 : cluster [DBG] pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T14:49:32.468 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:32 vm00 bash[28403]: cluster 2026-03-10T14:49:30.715527+0000 mgr.y (mgr.14152) 152 : cluster [DBG] pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T14:49:32.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:32 vm00 bash[20726]: cluster 2026-03-10T14:49:30.715527+0000 mgr.y (mgr.14152) 152 : cluster [DBG] pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T14:49:32.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:32 vm00 bash[20726]: cluster 2026-03-10T14:49:30.715527+0000 mgr.y (mgr.14152) 152 : cluster [DBG] pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T14:49:32.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:32 vm03 bash[23394]: cluster 2026-03-10T14:49:30.715527+0000 mgr.y (mgr.14152) 152 : cluster [DBG] pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T14:49:32.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:32 vm03 bash[23394]: cluster 2026-03-10T14:49:30.715527+0000 mgr.y (mgr.14152) 152 : cluster [DBG] pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T14:49:34.468 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:34 vm00 bash[28403]: cluster 2026-03-10T14:49:32.715821+0000 mgr.y (mgr.14152) 153 : cluster [DBG] pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T14:49:34.468 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:34 
vm00 bash[28403]: cluster 2026-03-10T14:49:32.715821+0000 mgr.y (mgr.14152) 153 : cluster [DBG] pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T14:49:34.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:34 vm00 bash[20726]: cluster 2026-03-10T14:49:32.715821+0000 mgr.y (mgr.14152) 153 : cluster [DBG] pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T14:49:34.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:34 vm00 bash[20726]: cluster 2026-03-10T14:49:32.715821+0000 mgr.y (mgr.14152) 153 : cluster [DBG] pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T14:49:34.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:34 vm03 bash[23394]: cluster 2026-03-10T14:49:32.715821+0000 mgr.y (mgr.14152) 153 : cluster [DBG] pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T14:49:34.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:34 vm03 bash[23394]: cluster 2026-03-10T14:49:32.715821+0000 mgr.y (mgr.14152) 153 : cluster [DBG] pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T14:49:36.468 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:36 vm00 bash[28403]: cluster 2026-03-10T14:49:34.716141+0000 mgr.y (mgr.14152) 154 : cluster [DBG] pgmap v127: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T14:49:36.468 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:36 vm00 bash[28403]: cluster 2026-03-10T14:49:34.716141+0000 mgr.y (mgr.14152) 154 : cluster [DBG] pgmap v127: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T14:49:36.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:36 vm00 bash[20726]: cluster 2026-03-10T14:49:34.716141+0000 mgr.y (mgr.14152) 154 : cluster [DBG] pgmap v127: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB 
avail 2026-03-10T14:49:36.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:36 vm00 bash[20726]: cluster 2026-03-10T14:49:34.716141+0000 mgr.y (mgr.14152) 154 : cluster [DBG] pgmap v127: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T14:49:36.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:36 vm03 bash[23394]: cluster 2026-03-10T14:49:34.716141+0000 mgr.y (mgr.14152) 154 : cluster [DBG] pgmap v127: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T14:49:36.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:36 vm03 bash[23394]: cluster 2026-03-10T14:49:34.716141+0000 mgr.y (mgr.14152) 154 : cluster [DBG] pgmap v127: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T14:49:38.467 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:38 vm03 bash[23394]: cluster 2026-03-10T14:49:36.716421+0000 mgr.y (mgr.14152) 155 : cluster [DBG] pgmap v128: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T14:49:38.467 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:38 vm03 bash[23394]: cluster 2026-03-10T14:49:36.716421+0000 mgr.y (mgr.14152) 155 : cluster [DBG] pgmap v128: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T14:49:38.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:38 vm00 bash[28403]: cluster 2026-03-10T14:49:36.716421+0000 mgr.y (mgr.14152) 155 : cluster [DBG] pgmap v128: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T14:49:38.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:38 vm00 bash[28403]: cluster 2026-03-10T14:49:36.716421+0000 mgr.y (mgr.14152) 155 : cluster [DBG] pgmap v128: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T14:49:38.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:38 vm00 bash[20726]: cluster 2026-03-10T14:49:36.716421+0000 mgr.y (mgr.14152) 155 : cluster 
[DBG] pgmap v128: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T14:49:38.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:38 vm00 bash[20726]: cluster 2026-03-10T14:49:36.716421+0000 mgr.y (mgr.14152) 155 : cluster [DBG] pgmap v128: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T14:49:39.465 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:39 vm03 bash[23394]: audit 2026-03-10T14:49:38.934798+0000 mon.a (mon.0) 461 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-10T14:49:39.465 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:39 vm03 bash[23394]: audit 2026-03-10T14:49:38.934798+0000 mon.a (mon.0) 461 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-10T14:49:39.465 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:39 vm03 bash[23394]: audit 2026-03-10T14:49:38.935276+0000 mon.a (mon.0) 462 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:49:39.465 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:39 vm03 bash[23394]: audit 2026-03-10T14:49:38.935276+0000 mon.a (mon.0) 462 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:49:39.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:39 vm00 bash[28403]: audit 2026-03-10T14:49:38.934798+0000 mon.a (mon.0) 461 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-10T14:49:39.468 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:39 vm00 bash[28403]: audit 2026-03-10T14:49:38.934798+0000 mon.a (mon.0) 461 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' 
entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-10T14:49:39.468 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:39 vm00 bash[28403]: audit 2026-03-10T14:49:38.935276+0000 mon.a (mon.0) 462 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:49:39.468 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:39 vm00 bash[28403]: audit 2026-03-10T14:49:38.935276+0000 mon.a (mon.0) 462 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:49:39.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:39 vm00 bash[20726]: audit 2026-03-10T14:49:38.934798+0000 mon.a (mon.0) 461 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-10T14:49:39.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:39 vm00 bash[20726]: audit 2026-03-10T14:49:38.934798+0000 mon.a (mon.0) 461 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-10T14:49:39.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:39 vm00 bash[20726]: audit 2026-03-10T14:49:38.935276+0000 mon.a (mon.0) 462 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:49:39.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:39 vm00 bash[20726]: audit 2026-03-10T14:49:38.935276+0000 mon.a (mon.0) 462 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:49:40.007 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:39 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use 
KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:49:40.007 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:39 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:49:40.007 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:49:39 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:49:40.007 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:49:39 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:49:40.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:40 vm00 bash[28403]: cluster 2026-03-10T14:49:38.716765+0000 mgr.y (mgr.14152) 156 : cluster [DBG] pgmap v129: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T14:49:40.718 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:40 vm00 bash[28403]: cluster 2026-03-10T14:49:38.716765+0000 mgr.y (mgr.14152) 156 : cluster [DBG] pgmap v129: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T14:49:40.718 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:40 vm00 bash[28403]: cephadm 2026-03-10T14:49:38.935885+0000 mgr.y (mgr.14152) 157 : cephadm [INF] Deploying daemon osd.4 on vm03
2026-03-10T14:49:40.718 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:40 vm00 bash[28403]: cephadm 2026-03-10T14:49:38.935885+0000 mgr.y (mgr.14152) 157 : cephadm [INF] Deploying daemon osd.4 on vm03
2026-03-10T14:49:40.718 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:40 vm00 bash[28403]: audit 2026-03-10T14:49:40.029453+0000 mon.a (mon.0) 463 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:49:40.718 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:40 vm00 bash[28403]: audit 2026-03-10T14:49:40.029453+0000 mon.a (mon.0) 463 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:49:40.718 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:40 vm00 bash[28403]: audit 2026-03-10T14:49:40.034766+0000 mon.a (mon.0) 464 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:49:40.718 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:40 vm00 bash[28403]: audit 2026-03-10T14:49:40.034766+0000 mon.a (mon.0) 464 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:49:40.718
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:40 vm00 bash[28403]: audit 2026-03-10T14:49:40.040424+0000 mon.a (mon.0) 465 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:49:40.718 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:40 vm00 bash[28403]: audit 2026-03-10T14:49:40.040424+0000 mon.a (mon.0) 465 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:49:40.718 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:40 vm00 bash[20726]: cluster 2026-03-10T14:49:38.716765+0000 mgr.y (mgr.14152) 156 : cluster [DBG] pgmap v129: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T14:49:40.718 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:40 vm00 bash[20726]: cluster 2026-03-10T14:49:38.716765+0000 mgr.y (mgr.14152) 156 : cluster [DBG] pgmap v129: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T14:49:40.718 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:40 vm00 bash[20726]: cephadm 2026-03-10T14:49:38.935885+0000 mgr.y (mgr.14152) 157 : cephadm [INF] Deploying daemon osd.4 on vm03
2026-03-10T14:49:40.718 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:40 vm00 bash[20726]: cephadm 2026-03-10T14:49:38.935885+0000 mgr.y (mgr.14152) 157 : cephadm [INF] Deploying daemon osd.4 on vm03
2026-03-10T14:49:40.718 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:40 vm00 bash[20726]: audit 2026-03-10T14:49:40.029453+0000 mon.a (mon.0) 463 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:49:40.718 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:40 vm00 bash[20726]: audit 2026-03-10T14:49:40.029453+0000 mon.a (mon.0) 463 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:49:40.718
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:40 vm00 bash[20726]: audit 2026-03-10T14:49:40.034766+0000 mon.a (mon.0) 464 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:49:40.718 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:40 vm00 bash[20726]: audit 2026-03-10T14:49:40.034766+0000 mon.a (mon.0) 464 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:49:40.718 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:40 vm00 bash[20726]: audit 2026-03-10T14:49:40.040424+0000 mon.a (mon.0) 465 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:49:40.718 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:40 vm00 bash[20726]: audit 2026-03-10T14:49:40.040424+0000 mon.a (mon.0) 465 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:49:40.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:40 vm03 bash[23394]: cluster 2026-03-10T14:49:38.716765+0000 mgr.y (mgr.14152) 156 : cluster [DBG] pgmap v129: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T14:49:40.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:40 vm03 bash[23394]: cluster 2026-03-10T14:49:38.716765+0000 mgr.y (mgr.14152) 156 : cluster [DBG] pgmap v129: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T14:49:40.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:40 vm03 bash[23394]: cephadm 2026-03-10T14:49:38.935885+0000 mgr.y (mgr.14152) 157 : cephadm [INF] Deploying daemon osd.4 on vm03
2026-03-10T14:49:40.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:40 vm03 bash[23394]: cephadm 2026-03-10T14:49:38.935885+0000 mgr.y (mgr.14152) 157 : cephadm [INF] Deploying daemon osd.4 on vm03
2026-03-10T14:49:40.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:40 vm03 bash[23394]: audit 2026-03-10T14:49:40.029453+0000 mon.a (mon.0) 463 : audit [DBG]
from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:49:40.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:40 vm03 bash[23394]: audit 2026-03-10T14:49:40.029453+0000 mon.a (mon.0) 463 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:49:40.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:40 vm03 bash[23394]: audit 2026-03-10T14:49:40.034766+0000 mon.a (mon.0) 464 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:49:40.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:40 vm03 bash[23394]: audit 2026-03-10T14:49:40.034766+0000 mon.a (mon.0) 464 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:49:40.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:40 vm03 bash[23394]: audit 2026-03-10T14:49:40.040424+0000 mon.a (mon.0) 465 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:49:40.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:40 vm03 bash[23394]: audit 2026-03-10T14:49:40.040424+0000 mon.a (mon.0) 465 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:49:41.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:41 vm03 bash[23394]: cluster 2026-03-10T14:49:40.717069+0000 mgr.y (mgr.14152) 158 : cluster [DBG] pgmap v130: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T14:49:41.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:41 vm03 bash[23394]: cluster 2026-03-10T14:49:40.717069+0000 mgr.y (mgr.14152) 158 : cluster [DBG] pgmap v130: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T14:49:41.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:41 vm00 bash[28403]: cluster 2026-03-10T14:49:40.717069+0000 mgr.y (mgr.14152) 158
: cluster [DBG] pgmap v130: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T14:49:41.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:41 vm00 bash[28403]: cluster 2026-03-10T14:49:40.717069+0000 mgr.y (mgr.14152) 158 : cluster [DBG] pgmap v130: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T14:49:41.718 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:41 vm00 bash[20726]: cluster 2026-03-10T14:49:40.717069+0000 mgr.y (mgr.14152) 158 : cluster [DBG] pgmap v130: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T14:49:41.718 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:41 vm00 bash[20726]: cluster 2026-03-10T14:49:40.717069+0000 mgr.y (mgr.14152) 158 : cluster [DBG] pgmap v130: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T14:49:43.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:43 vm03 bash[23394]: cluster 2026-03-10T14:49:42.717328+0000 mgr.y (mgr.14152) 159 : cluster [DBG] pgmap v131: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T14:49:43.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:43 vm03 bash[23394]: cluster 2026-03-10T14:49:42.717328+0000 mgr.y (mgr.14152) 159 : cluster [DBG] pgmap v131: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T14:49:43.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:43 vm03 bash[23394]: audit 2026-03-10T14:49:43.528624+0000 mon.b (mon.1) 12 : audit [INF] from='osd.4 v2:192.168.123.103:6800/4249951776' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch
2026-03-10T14:49:43.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:43 vm03 bash[23394]: audit 2026-03-10T14:49:43.528624+0000 mon.b (mon.1) 12 : audit [INF] from='osd.4 v2:192.168.123.103:6800/4249951776' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class",
"class": "hdd", "ids": ["4"]}]: dispatch
2026-03-10T14:49:43.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:43 vm03 bash[23394]: audit 2026-03-10T14:49:43.529486+0000 mon.a (mon.0) 466 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch
2026-03-10T14:49:43.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:43 vm03 bash[23394]: audit 2026-03-10T14:49:43.529486+0000 mon.a (mon.0) 466 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch
2026-03-10T14:49:44.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:43 vm00 bash[28403]: cluster 2026-03-10T14:49:42.717328+0000 mgr.y (mgr.14152) 159 : cluster [DBG] pgmap v131: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T14:49:44.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:43 vm00 bash[28403]: cluster 2026-03-10T14:49:42.717328+0000 mgr.y (mgr.14152) 159 : cluster [DBG] pgmap v131: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T14:49:44.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:43 vm00 bash[28403]: audit 2026-03-10T14:49:43.528624+0000 mon.b (mon.1) 12 : audit [INF] from='osd.4 v2:192.168.123.103:6800/4249951776' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch
2026-03-10T14:49:44.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:43 vm00 bash[28403]: audit 2026-03-10T14:49:43.528624+0000 mon.b (mon.1) 12 : audit [INF] from='osd.4 v2:192.168.123.103:6800/4249951776' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch
2026-03-10T14:49:44.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:43 vm00 bash[28403]: audit 2026-03-10T14:49:43.529486+0000 mon.a (mon.0) 466 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush
set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch
2026-03-10T14:49:44.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:43 vm00 bash[28403]: audit 2026-03-10T14:49:43.529486+0000 mon.a (mon.0) 466 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch
2026-03-10T14:49:44.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:43 vm00 bash[20726]: cluster 2026-03-10T14:49:42.717328+0000 mgr.y (mgr.14152) 159 : cluster [DBG] pgmap v131: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T14:49:44.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:43 vm00 bash[20726]: cluster 2026-03-10T14:49:42.717328+0000 mgr.y (mgr.14152) 159 : cluster [DBG] pgmap v131: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T14:49:44.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:43 vm00 bash[20726]: audit 2026-03-10T14:49:43.528624+0000 mon.b (mon.1) 12 : audit [INF] from='osd.4 v2:192.168.123.103:6800/4249951776' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch
2026-03-10T14:49:44.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:43 vm00 bash[20726]: audit 2026-03-10T14:49:43.528624+0000 mon.b (mon.1) 12 : audit [INF] from='osd.4 v2:192.168.123.103:6800/4249951776' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch
2026-03-10T14:49:44.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:43 vm00 bash[20726]: audit 2026-03-10T14:49:43.529486+0000 mon.a (mon.0) 466 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch
2026-03-10T14:49:44.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:43 vm00 bash[20726]: audit 2026-03-10T14:49:43.529486+0000 mon.a (mon.0) 466 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd
crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch
2026-03-10T14:49:45.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:44 vm03 bash[23394]: audit 2026-03-10T14:49:43.789466+0000 mon.a (mon.0) 467 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished
2026-03-10T14:49:45.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:44 vm03 bash[23394]: audit 2026-03-10T14:49:43.789466+0000 mon.a (mon.0) 467 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished
2026-03-10T14:49:45.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:44 vm03 bash[23394]: audit 2026-03-10T14:49:43.793401+0000 mon.b (mon.1) 13 : audit [INF] from='osd.4 v2:192.168.123.103:6800/4249951776' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-10T14:49:45.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:44 vm03 bash[23394]: audit 2026-03-10T14:49:43.793401+0000 mon.b (mon.1) 13 : audit [INF] from='osd.4 v2:192.168.123.103:6800/4249951776' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-10T14:49:45.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:44 vm03 bash[23394]: cluster 2026-03-10T14:49:43.794263+0000 mon.a (mon.0) 468 : cluster [DBG] osdmap e29: 5 total, 4 up, 5 in
2026-03-10T14:49:45.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:44 vm03 bash[23394]: cluster 2026-03-10T14:49:43.794263+0000 mon.a (mon.0) 468 : cluster [DBG] osdmap e29: 5 total, 4 up, 5 in
2026-03-10T14:49:45.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:44 vm03 bash[23394]: audit 2026-03-10T14:49:43.794701+0000 mon.a (mon.0) 469 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata",
"id": 4}]: dispatch
2026-03-10T14:49:45.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:44 vm03 bash[23394]: audit 2026-03-10T14:49:43.794701+0000 mon.a (mon.0) 469 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T14:49:45.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:44 vm03 bash[23394]: audit 2026-03-10T14:49:43.794871+0000 mon.a (mon.0) 470 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-10T14:49:45.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:44 vm03 bash[23394]: audit 2026-03-10T14:49:43.794871+0000 mon.a (mon.0) 470 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-10T14:49:45.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:44 vm00 bash[28403]: audit 2026-03-10T14:49:43.789466+0000 mon.a (mon.0) 467 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished
2026-03-10T14:49:45.218 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:44 vm00 bash[28403]: audit 2026-03-10T14:49:43.789466+0000 mon.a (mon.0) 467 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished
2026-03-10T14:49:45.218 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:44 vm00 bash[28403]: audit 2026-03-10T14:49:43.793401+0000 mon.b (mon.1) 13 : audit [INF] from='osd.4 v2:192.168.123.103:6800/4249951776' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-10T14:49:45.218 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:44 vm00 bash[28403]: audit 2026-03-10T14:49:43.793401+0000 mon.b (mon.1)
13 : audit [INF] from='osd.4 v2:192.168.123.103:6800/4249951776' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-10T14:49:45.218 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:44 vm00 bash[28403]: cluster 2026-03-10T14:49:43.794263+0000 mon.a (mon.0) 468 : cluster [DBG] osdmap e29: 5 total, 4 up, 5 in
2026-03-10T14:49:45.218 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:44 vm00 bash[28403]: cluster 2026-03-10T14:49:43.794263+0000 mon.a (mon.0) 468 : cluster [DBG] osdmap e29: 5 total, 4 up, 5 in
2026-03-10T14:49:45.218 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:44 vm00 bash[28403]: audit 2026-03-10T14:49:43.794701+0000 mon.a (mon.0) 469 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T14:49:45.218 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:44 vm00 bash[28403]: audit 2026-03-10T14:49:43.794701+0000 mon.a (mon.0) 469 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T14:49:45.218 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:44 vm00 bash[28403]: audit 2026-03-10T14:49:43.794871+0000 mon.a (mon.0) 470 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-10T14:49:45.218 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:44 vm00 bash[28403]: audit 2026-03-10T14:49:43.794871+0000 mon.a (mon.0) 470 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-10T14:49:45.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:44 vm00 bash[20726]: audit 2026-03-10T14:49:43.789466+0000 mon.a (mon.0) 467 : audit [INF] from='osd.4 ' entity='osd.4'
cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished
2026-03-10T14:49:45.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:44 vm00 bash[20726]: audit 2026-03-10T14:49:43.789466+0000 mon.a (mon.0) 467 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished
2026-03-10T14:49:45.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:44 vm00 bash[20726]: audit 2026-03-10T14:49:43.793401+0000 mon.b (mon.1) 13 : audit [INF] from='osd.4 v2:192.168.123.103:6800/4249951776' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-10T14:49:45.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:44 vm00 bash[20726]: audit 2026-03-10T14:49:43.793401+0000 mon.b (mon.1) 13 : audit [INF] from='osd.4 v2:192.168.123.103:6800/4249951776' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-10T14:49:45.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:44 vm00 bash[20726]: cluster 2026-03-10T14:49:43.794263+0000 mon.a (mon.0) 468 : cluster [DBG] osdmap e29: 5 total, 4 up, 5 in
2026-03-10T14:49:45.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:44 vm00 bash[20726]: cluster 2026-03-10T14:49:43.794263+0000 mon.a (mon.0) 468 : cluster [DBG] osdmap e29: 5 total, 4 up, 5 in
2026-03-10T14:49:45.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:44 vm00 bash[20726]: audit 2026-03-10T14:49:43.794701+0000 mon.a (mon.0) 469 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T14:49:45.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:44 vm00 bash[20726]: audit 2026-03-10T14:49:43.794701+0000 mon.a (mon.0) 469 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T14:49:45.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:44 vm00 bash[20726]: audit 2026-03-10T14:49:43.794871+0000 mon.a (mon.0) 470 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-10T14:49:45.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:44 vm00 bash[20726]: audit 2026-03-10T14:49:43.794871+0000 mon.a (mon.0) 470 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-10T14:49:46.111 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:45 vm03 bash[23394]: cluster 2026-03-10T14:49:44.717585+0000 mgr.y (mgr.14152) 160 : cluster [DBG] pgmap v133: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T14:49:46.111 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:45 vm03 bash[23394]: cluster 2026-03-10T14:49:44.717585+0000 mgr.y (mgr.14152) 160 : cluster [DBG] pgmap v133: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T14:49:46.111 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:45 vm03 bash[23394]: audit 2026-03-10T14:49:44.794322+0000 mon.a (mon.0) 471 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished
2026-03-10T14:49:46.112 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:45 vm03 bash[23394]: audit 2026-03-10T14:49:44.794322+0000 mon.a (mon.0) 471 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished
2026-03-10T14:49:46.112 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:45 vm03 bash[23394]: cluster 2026-03-10T14:49:44.803046+0000 mon.a (mon.0) 472 :
cluster [DBG] osdmap e30: 5 total, 4 up, 5 in
2026-03-10T14:49:46.112 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:45 vm03 bash[23394]: cluster 2026-03-10T14:49:44.803046+0000 mon.a (mon.0) 472 : cluster [DBG] osdmap e30: 5 total, 4 up, 5 in
2026-03-10T14:49:46.112 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:45 vm03 bash[23394]: audit 2026-03-10T14:49:44.809663+0000 mon.a (mon.0) 473 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T14:49:46.112 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:45 vm03 bash[23394]: audit 2026-03-10T14:49:44.809663+0000 mon.a (mon.0) 473 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T14:49:46.112 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:45 vm03 bash[23394]: cluster 2026-03-10T14:49:45.800819+0000 mon.a (mon.0) 474 : cluster [DBG] osdmap e31: 5 total, 4 up, 5 in
2026-03-10T14:49:46.112 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:45 vm03 bash[23394]: cluster 2026-03-10T14:49:45.800819+0000 mon.a (mon.0) 474 : cluster [DBG] osdmap e31: 5 total, 4 up, 5 in
2026-03-10T14:49:46.112 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:45 vm03 bash[23394]: audit 2026-03-10T14:49:45.800933+0000 mon.a (mon.0) 475 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T14:49:46.112 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:45 vm03 bash[23394]: audit 2026-03-10T14:49:45.800933+0000 mon.a (mon.0) 475 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T14:49:46.112 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:45 vm03 bash[23394]: audit 2026-03-10T14:49:45.806790+0000 mon.a (mon.0) 476 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183'
entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T14:49:46.112 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:45 vm03 bash[23394]: audit 2026-03-10T14:49:45.806790+0000 mon.a (mon.0) 476 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T14:49:46.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:45 vm00 bash[20726]: cluster 2026-03-10T14:49:44.717585+0000 mgr.y (mgr.14152) 160 : cluster [DBG] pgmap v133: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T14:49:46.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:45 vm00 bash[20726]: cluster 2026-03-10T14:49:44.717585+0000 mgr.y (mgr.14152) 160 : cluster [DBG] pgmap v133: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T14:49:46.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:45 vm00 bash[20726]: audit 2026-03-10T14:49:44.794322+0000 mon.a (mon.0) 471 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished
2026-03-10T14:49:46.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:45 vm00 bash[20726]: audit 2026-03-10T14:49:44.794322+0000 mon.a (mon.0) 471 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished
2026-03-10T14:49:46.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:45 vm00 bash[20726]: cluster 2026-03-10T14:49:44.803046+0000 mon.a (mon.0) 472 : cluster [DBG] osdmap e30: 5 total, 4 up, 5 in
2026-03-10T14:49:46.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:45 vm00 bash[20726]: cluster 2026-03-10T14:49:44.803046+0000 mon.a (mon.0) 472 : cluster [DBG] osdmap e30: 5 total, 4 up, 5 in
2026-03-10T14:49:46.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:45 vm00
bash[20726]: audit 2026-03-10T14:49:44.809663+0000 mon.a (mon.0) 473 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T14:49:46.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:45 vm00 bash[20726]: audit 2026-03-10T14:49:44.809663+0000 mon.a (mon.0) 473 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T14:49:46.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:45 vm00 bash[20726]: cluster 2026-03-10T14:49:45.800819+0000 mon.a (mon.0) 474 : cluster [DBG] osdmap e31: 5 total, 4 up, 5 in
2026-03-10T14:49:46.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:45 vm00 bash[20726]: cluster 2026-03-10T14:49:45.800819+0000 mon.a (mon.0) 474 : cluster [DBG] osdmap e31: 5 total, 4 up, 5 in
2026-03-10T14:49:46.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:45 vm00 bash[20726]: audit 2026-03-10T14:49:45.800933+0000 mon.a (mon.0) 475 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T14:49:46.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:45 vm00 bash[20726]: audit 2026-03-10T14:49:45.800933+0000 mon.a (mon.0) 475 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T14:49:46.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:45 vm00 bash[20726]: audit 2026-03-10T14:49:45.806790+0000 mon.a (mon.0) 476 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T14:49:46.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:45 vm00 bash[20726]: audit 2026-03-10T14:49:45.806790+0000 mon.a (mon.0) 476 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]:
dispatch
2026-03-10T14:49:46.218 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:45 vm00 bash[28403]: cluster 2026-03-10T14:49:44.717585+0000 mgr.y (mgr.14152) 160 : cluster [DBG] pgmap v133: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T14:49:46.218 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:45 vm00 bash[28403]: cluster 2026-03-10T14:49:44.717585+0000 mgr.y (mgr.14152) 160 : cluster [DBG] pgmap v133: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T14:49:46.218 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:45 vm00 bash[28403]: audit 2026-03-10T14:49:44.794322+0000 mon.a (mon.0) 471 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished
2026-03-10T14:49:46.218 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:45 vm00 bash[28403]: audit 2026-03-10T14:49:44.794322+0000 mon.a (mon.0) 471 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished
2026-03-10T14:49:46.218 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:45 vm00 bash[28403]: cluster 2026-03-10T14:49:44.803046+0000 mon.a (mon.0) 472 : cluster [DBG] osdmap e30: 5 total, 4 up, 5 in
2026-03-10T14:49:46.218 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:45 vm00 bash[28403]: cluster 2026-03-10T14:49:44.803046+0000 mon.a (mon.0) 472 : cluster [DBG] osdmap e30: 5 total, 4 up, 5 in
2026-03-10T14:49:46.218 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:45 vm00 bash[28403]: audit 2026-03-10T14:49:44.809663+0000 mon.a (mon.0) 473 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T14:49:46.218 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:45 vm00 bash[28403]: audit 2026-03-10T14:49:44.809663+0000 mon.a (mon.0)
473 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T14:49:46.218 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:45 vm00 bash[28403]: cluster 2026-03-10T14:49:45.800819+0000 mon.a (mon.0) 474 : cluster [DBG] osdmap e31: 5 total, 4 up, 5 in 2026-03-10T14:49:46.218 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:45 vm00 bash[28403]: cluster 2026-03-10T14:49:45.800819+0000 mon.a (mon.0) 474 : cluster [DBG] osdmap e31: 5 total, 4 up, 5 in 2026-03-10T14:49:46.218 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:45 vm00 bash[28403]: audit 2026-03-10T14:49:45.800933+0000 mon.a (mon.0) 475 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T14:49:46.218 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:45 vm00 bash[28403]: audit 2026-03-10T14:49:45.800933+0000 mon.a (mon.0) 475 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T14:49:46.218 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:45 vm00 bash[28403]: audit 2026-03-10T14:49:45.806790+0000 mon.a (mon.0) 476 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T14:49:46.218 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:45 vm00 bash[28403]: audit 2026-03-10T14:49:45.806790+0000 mon.a (mon.0) 476 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T14:49:47.145 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:46 vm03 bash[23394]: cluster 2026-03-10T14:49:44.517145+0000 osd.4 (osd.4) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T14:49:47.145 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:46 vm03 bash[23394]: cluster 2026-03-10T14:49:44.517145+0000 osd.4 
(osd.4) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T14:49:47.145 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:46 vm03 bash[23394]: cluster 2026-03-10T14:49:44.517226+0000 osd.4 (osd.4) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T14:49:47.145 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:46 vm03 bash[23394]: cluster 2026-03-10T14:49:44.517226+0000 osd.4 (osd.4) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T14:49:47.145 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:46 vm03 bash[23394]: audit 2026-03-10T14:49:45.890593+0000 mon.a (mon.0) 477 : audit [INF] from='osd.4 ' entity='osd.4' 2026-03-10T14:49:47.145 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:46 vm03 bash[23394]: audit 2026-03-10T14:49:45.890593+0000 mon.a (mon.0) 477 : audit [INF] from='osd.4 ' entity='osd.4' 2026-03-10T14:49:47.145 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:46 vm03 bash[23394]: audit 2026-03-10T14:49:46.127387+0000 mon.a (mon.0) 478 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:49:47.146 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:46 vm03 bash[23394]: audit 2026-03-10T14:49:46.127387+0000 mon.a (mon.0) 478 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:49:47.146 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:46 vm03 bash[23394]: audit 2026-03-10T14:49:46.133432+0000 mon.a (mon.0) 479 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:49:47.146 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:46 vm03 bash[23394]: audit 2026-03-10T14:49:46.133432+0000 mon.a (mon.0) 479 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:49:47.146 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:46 vm03 bash[23394]: audit 2026-03-10T14:49:46.532061+0000 mon.a (mon.0) 480 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config 
generate-minimal-conf"}]: dispatch 2026-03-10T14:49:47.146 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:46 vm03 bash[23394]: audit 2026-03-10T14:49:46.532061+0000 mon.a (mon.0) 480 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:49:47.146 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:46 vm03 bash[23394]: audit 2026-03-10T14:49:46.532725+0000 mon.a (mon.0) 481 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:49:47.146 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:46 vm03 bash[23394]: audit 2026-03-10T14:49:46.532725+0000 mon.a (mon.0) 481 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:49:47.146 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:46 vm03 bash[23394]: audit 2026-03-10T14:49:46.538345+0000 mon.a (mon.0) 482 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:49:47.146 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:46 vm03 bash[23394]: audit 2026-03-10T14:49:46.538345+0000 mon.a (mon.0) 482 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:49:47.146 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:46 vm03 bash[23394]: cluster 2026-03-10T14:49:46.808573+0000 mon.a (mon.0) 483 : cluster [INF] osd.4 v2:192.168.123.103:6800/4249951776 boot 2026-03-10T14:49:47.146 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:46 vm03 bash[23394]: cluster 2026-03-10T14:49:46.808573+0000 mon.a (mon.0) 483 : cluster [INF] osd.4 v2:192.168.123.103:6800/4249951776 boot 2026-03-10T14:49:47.146 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:46 vm03 bash[23394]: cluster 2026-03-10T14:49:46.808732+0000 mon.a (mon.0) 484 : cluster [DBG] osdmap e32: 5 total, 5 up, 5 
in 2026-03-10T14:49:47.146 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:46 vm03 bash[23394]: cluster 2026-03-10T14:49:46.808732+0000 mon.a (mon.0) 484 : cluster [DBG] osdmap e32: 5 total, 5 up, 5 in 2026-03-10T14:49:47.146 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:46 vm03 bash[23394]: audit 2026-03-10T14:49:46.808890+0000 mon.a (mon.0) 485 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T14:49:47.146 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:46 vm03 bash[23394]: audit 2026-03-10T14:49:46.808890+0000 mon.a (mon.0) 485 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T14:49:47.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:46 vm00 bash[20726]: cluster 2026-03-10T14:49:44.517145+0000 osd.4 (osd.4) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T14:49:47.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:46 vm00 bash[20726]: cluster 2026-03-10T14:49:44.517145+0000 osd.4 (osd.4) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T14:49:47.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:46 vm00 bash[20726]: cluster 2026-03-10T14:49:44.517226+0000 osd.4 (osd.4) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T14:49:47.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:46 vm00 bash[20726]: cluster 2026-03-10T14:49:44.517226+0000 osd.4 (osd.4) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T14:49:47.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:46 vm00 bash[20726]: audit 2026-03-10T14:49:45.890593+0000 mon.a (mon.0) 477 : audit [INF] from='osd.4 ' entity='osd.4' 2026-03-10T14:49:47.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:46 vm00 bash[20726]: audit 2026-03-10T14:49:45.890593+0000 mon.a (mon.0) 477 : audit [INF] from='osd.4 ' entity='osd.4' 2026-03-10T14:49:47.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 
14:49:46 vm00 bash[20726]: audit 2026-03-10T14:49:46.127387+0000 mon.a (mon.0) 478 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:49:47.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:46 vm00 bash[20726]: audit 2026-03-10T14:49:46.127387+0000 mon.a (mon.0) 478 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:49:47.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:46 vm00 bash[20726]: audit 2026-03-10T14:49:46.133432+0000 mon.a (mon.0) 479 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:49:47.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:46 vm00 bash[20726]: audit 2026-03-10T14:49:46.133432+0000 mon.a (mon.0) 479 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:49:47.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:46 vm00 bash[20726]: audit 2026-03-10T14:49:46.532061+0000 mon.a (mon.0) 480 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:49:47.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:46 vm00 bash[20726]: audit 2026-03-10T14:49:46.532061+0000 mon.a (mon.0) 480 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:49:47.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:46 vm00 bash[20726]: audit 2026-03-10T14:49:46.532725+0000 mon.a (mon.0) 481 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:49:47.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:46 vm00 bash[20726]: audit 2026-03-10T14:49:46.532725+0000 mon.a (mon.0) 481 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: 
dispatch 2026-03-10T14:49:47.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:46 vm00 bash[20726]: audit 2026-03-10T14:49:46.538345+0000 mon.a (mon.0) 482 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:49:47.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:46 vm00 bash[20726]: audit 2026-03-10T14:49:46.538345+0000 mon.a (mon.0) 482 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:49:47.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:46 vm00 bash[20726]: cluster 2026-03-10T14:49:46.808573+0000 mon.a (mon.0) 483 : cluster [INF] osd.4 v2:192.168.123.103:6800/4249951776 boot 2026-03-10T14:49:47.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:46 vm00 bash[20726]: cluster 2026-03-10T14:49:46.808573+0000 mon.a (mon.0) 483 : cluster [INF] osd.4 v2:192.168.123.103:6800/4249951776 boot 2026-03-10T14:49:47.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:46 vm00 bash[20726]: cluster 2026-03-10T14:49:46.808732+0000 mon.a (mon.0) 484 : cluster [DBG] osdmap e32: 5 total, 5 up, 5 in 2026-03-10T14:49:47.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:46 vm00 bash[20726]: cluster 2026-03-10T14:49:46.808732+0000 mon.a (mon.0) 484 : cluster [DBG] osdmap e32: 5 total, 5 up, 5 in 2026-03-10T14:49:47.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:46 vm00 bash[20726]: audit 2026-03-10T14:49:46.808890+0000 mon.a (mon.0) 485 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T14:49:47.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:46 vm00 bash[20726]: audit 2026-03-10T14:49:46.808890+0000 mon.a (mon.0) 485 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T14:49:47.218 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:46 vm00 bash[28403]: cluster 
2026-03-10T14:49:44.517145+0000 osd.4 (osd.4) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T14:49:47.218 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:46 vm00 bash[28403]: cluster 2026-03-10T14:49:44.517145+0000 osd.4 (osd.4) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T14:49:47.218 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:46 vm00 bash[28403]: cluster 2026-03-10T14:49:44.517226+0000 osd.4 (osd.4) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T14:49:47.218 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:46 vm00 bash[28403]: cluster 2026-03-10T14:49:44.517226+0000 osd.4 (osd.4) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T14:49:47.218 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:46 vm00 bash[28403]: audit 2026-03-10T14:49:45.890593+0000 mon.a (mon.0) 477 : audit [INF] from='osd.4 ' entity='osd.4' 2026-03-10T14:49:47.218 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:46 vm00 bash[28403]: audit 2026-03-10T14:49:45.890593+0000 mon.a (mon.0) 477 : audit [INF] from='osd.4 ' entity='osd.4' 2026-03-10T14:49:47.218 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:46 vm00 bash[28403]: audit 2026-03-10T14:49:46.127387+0000 mon.a (mon.0) 478 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:49:47.218 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:46 vm00 bash[28403]: audit 2026-03-10T14:49:46.127387+0000 mon.a (mon.0) 478 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:49:47.218 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:46 vm00 bash[28403]: audit 2026-03-10T14:49:46.133432+0000 mon.a (mon.0) 479 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:49:47.218 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:46 vm00 bash[28403]: audit 2026-03-10T14:49:46.133432+0000 mon.a (mon.0) 479 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:49:47.218 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:46 vm00 bash[28403]: audit 2026-03-10T14:49:46.532061+0000 mon.a (mon.0) 480 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:49:47.218 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:46 vm00 bash[28403]: audit 2026-03-10T14:49:46.532061+0000 mon.a (mon.0) 480 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:49:47.218 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:46 vm00 bash[28403]: audit 2026-03-10T14:49:46.532725+0000 mon.a (mon.0) 481 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:49:47.218 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:46 vm00 bash[28403]: audit 2026-03-10T14:49:46.532725+0000 mon.a (mon.0) 481 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:49:47.218 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:46 vm00 bash[28403]: audit 2026-03-10T14:49:46.538345+0000 mon.a (mon.0) 482 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:49:47.218 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:46 vm00 bash[28403]: audit 2026-03-10T14:49:46.538345+0000 mon.a (mon.0) 482 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:49:47.219 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:46 vm00 bash[28403]: cluster 2026-03-10T14:49:46.808573+0000 mon.a (mon.0) 483 : cluster [INF] osd.4 v2:192.168.123.103:6800/4249951776 boot 2026-03-10T14:49:47.219 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:46 vm00 bash[28403]: cluster 2026-03-10T14:49:46.808573+0000 mon.a (mon.0) 483 : cluster [INF] osd.4 
v2:192.168.123.103:6800/4249951776 boot 2026-03-10T14:49:47.219 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:46 vm00 bash[28403]: cluster 2026-03-10T14:49:46.808732+0000 mon.a (mon.0) 484 : cluster [DBG] osdmap e32: 5 total, 5 up, 5 in 2026-03-10T14:49:47.219 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:46 vm00 bash[28403]: cluster 2026-03-10T14:49:46.808732+0000 mon.a (mon.0) 484 : cluster [DBG] osdmap e32: 5 total, 5 up, 5 in 2026-03-10T14:49:47.219 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:46 vm00 bash[28403]: audit 2026-03-10T14:49:46.808890+0000 mon.a (mon.0) 485 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T14:49:47.219 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:46 vm00 bash[28403]: audit 2026-03-10T14:49:46.808890+0000 mon.a (mon.0) 485 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T14:49:47.219 INFO:teuthology.orchestra.run.vm03.stdout:Created osd(s) 4 on host 'vm03' 2026-03-10T14:49:47.310 DEBUG:teuthology.orchestra.run.vm03:osd.4> sudo journalctl -f -n 0 -u ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@osd.4.service 2026-03-10T14:49:47.311 INFO:tasks.cephadm:Deploying osd.5 on vm03 with /dev/vdd... 
2026-03-10T14:49:47.311 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 -- lvm zap /dev/vdd 2026-03-10T14:49:48.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:47 vm00 bash[20726]: cluster 2026-03-10T14:49:46.717925+0000 mgr.y (mgr.14152) 161 : cluster [DBG] pgmap v136: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T14:49:48.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:47 vm00 bash[20726]: cluster 2026-03-10T14:49:46.717925+0000 mgr.y (mgr.14152) 161 : cluster [DBG] pgmap v136: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T14:49:48.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:47 vm00 bash[20726]: audit 2026-03-10T14:49:47.200375+0000 mon.a (mon.0) 486 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:49:48.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:47 vm00 bash[20726]: audit 2026-03-10T14:49:47.200375+0000 mon.a (mon.0) 486 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:49:48.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:47 vm00 bash[20726]: audit 2026-03-10T14:49:47.208686+0000 mon.a (mon.0) 487 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:49:48.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:47 vm00 bash[20726]: audit 2026-03-10T14:49:47.208686+0000 mon.a (mon.0) 487 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:49:48.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:47 vm00 bash[20726]: audit 2026-03-10T14:49:47.214826+0000 
mon.a (mon.0) 488 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:49:48.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:47 vm00 bash[20726]: audit 2026-03-10T14:49:47.214826+0000 mon.a (mon.0) 488 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:49:48.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:47 vm00 bash[20726]: cluster 2026-03-10T14:49:47.810766+0000 mon.a (mon.0) 489 : cluster [DBG] osdmap e33: 5 total, 5 up, 5 in 2026-03-10T14:49:48.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:47 vm00 bash[20726]: cluster 2026-03-10T14:49:47.810766+0000 mon.a (mon.0) 489 : cluster [DBG] osdmap e33: 5 total, 5 up, 5 in 2026-03-10T14:49:48.218 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:47 vm00 bash[28403]: cluster 2026-03-10T14:49:46.717925+0000 mgr.y (mgr.14152) 161 : cluster [DBG] pgmap v136: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T14:49:48.218 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:47 vm00 bash[28403]: cluster 2026-03-10T14:49:46.717925+0000 mgr.y (mgr.14152) 161 : cluster [DBG] pgmap v136: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T14:49:48.218 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:47 vm00 bash[28403]: audit 2026-03-10T14:49:47.200375+0000 mon.a (mon.0) 486 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:49:48.218 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:47 vm00 bash[28403]: audit 2026-03-10T14:49:47.200375+0000 mon.a (mon.0) 486 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:49:48.218 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:47 vm00 bash[28403]: audit 2026-03-10T14:49:47.208686+0000 mon.a (mon.0) 487 : audit [INF] 
from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:49:48.218 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:47 vm00 bash[28403]: audit 2026-03-10T14:49:47.208686+0000 mon.a (mon.0) 487 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:49:48.218 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:47 vm00 bash[28403]: audit 2026-03-10T14:49:47.214826+0000 mon.a (mon.0) 488 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:49:48.218 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:47 vm00 bash[28403]: audit 2026-03-10T14:49:47.214826+0000 mon.a (mon.0) 488 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:49:48.218 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:47 vm00 bash[28403]: cluster 2026-03-10T14:49:47.810766+0000 mon.a (mon.0) 489 : cluster [DBG] osdmap e33: 5 total, 5 up, 5 in 2026-03-10T14:49:48.218 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:47 vm00 bash[28403]: cluster 2026-03-10T14:49:47.810766+0000 mon.a (mon.0) 489 : cluster [DBG] osdmap e33: 5 total, 5 up, 5 in 2026-03-10T14:49:48.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:47 vm03 bash[23394]: cluster 2026-03-10T14:49:46.717925+0000 mgr.y (mgr.14152) 161 : cluster [DBG] pgmap v136: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T14:49:48.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:47 vm03 bash[23394]: cluster 2026-03-10T14:49:46.717925+0000 mgr.y (mgr.14152) 161 : cluster [DBG] pgmap v136: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T14:49:48.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:47 vm03 bash[23394]: audit 2026-03-10T14:49:47.200375+0000 mon.a (mon.0) 486 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:49:48.375 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:47 vm03 bash[23394]: audit 2026-03-10T14:49:47.200375+0000 mon.a (mon.0) 486 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:49:48.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:47 vm03 bash[23394]: audit 2026-03-10T14:49:47.208686+0000 mon.a (mon.0) 487 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:49:48.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:47 vm03 bash[23394]: audit 2026-03-10T14:49:47.208686+0000 mon.a (mon.0) 487 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:49:48.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:47 vm03 bash[23394]: audit 2026-03-10T14:49:47.214826+0000 mon.a (mon.0) 488 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:49:48.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:47 vm03 bash[23394]: audit 2026-03-10T14:49:47.214826+0000 mon.a (mon.0) 488 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:49:48.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:47 vm03 bash[23394]: cluster 2026-03-10T14:49:47.810766+0000 mon.a (mon.0) 489 : cluster [DBG] osdmap e33: 5 total, 5 up, 5 in 2026-03-10T14:49:48.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:47 vm03 bash[23394]: cluster 2026-03-10T14:49:47.810766+0000 mon.a (mon.0) 489 : cluster [DBG] osdmap e33: 5 total, 5 up, 5 in 2026-03-10T14:49:50.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:49 vm03 bash[23394]: cluster 2026-03-10T14:49:48.718243+0000 mgr.y (mgr.14152) 162 : cluster [DBG] pgmap v139: 1 pgs: 1 remapped+peering; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T14:49:50.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:49 vm03 bash[23394]: cluster 2026-03-10T14:49:48.718243+0000 mgr.y 
(mgr.14152) 162 : cluster [DBG] pgmap v139: 1 pgs: 1 remapped+peering; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T14:49:50.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:49 vm03 bash[23394]: cluster 2026-03-10T14:49:48.841743+0000 mon.a (mon.0) 490 : cluster [DBG] osdmap e34: 5 total, 5 up, 5 in 2026-03-10T14:49:50.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:49 vm03 bash[23394]: cluster 2026-03-10T14:49:48.841743+0000 mon.a (mon.0) 490 : cluster [DBG] osdmap e34: 5 total, 5 up, 5 in 2026-03-10T14:49:50.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:49 vm00 bash[20726]: cluster 2026-03-10T14:49:48.718243+0000 mgr.y (mgr.14152) 162 : cluster [DBG] pgmap v139: 1 pgs: 1 remapped+peering; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T14:49:50.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:49 vm00 bash[20726]: cluster 2026-03-10T14:49:48.718243+0000 mgr.y (mgr.14152) 162 : cluster [DBG] pgmap v139: 1 pgs: 1 remapped+peering; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T14:49:50.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:49 vm00 bash[20726]: cluster 2026-03-10T14:49:48.841743+0000 mon.a (mon.0) 490 : cluster [DBG] osdmap e34: 5 total, 5 up, 5 in 2026-03-10T14:49:50.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:49 vm00 bash[20726]: cluster 2026-03-10T14:49:48.841743+0000 mon.a (mon.0) 490 : cluster [DBG] osdmap e34: 5 total, 5 up, 5 in 2026-03-10T14:49:50.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:49 vm00 bash[28403]: cluster 2026-03-10T14:49:48.718243+0000 mgr.y (mgr.14152) 162 : cluster [DBG] pgmap v139: 1 pgs: 1 remapped+peering; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T14:49:50.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:49 vm00 bash[28403]: cluster 2026-03-10T14:49:48.718243+0000 mgr.y (mgr.14152) 162 : cluster [DBG] pgmap v139: 1 pgs: 1 remapped+peering; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 
2026-03-10T14:49:50.218 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:49 vm00 bash[28403]: cluster 2026-03-10T14:49:48.841743+0000 mon.a (mon.0) 490 : cluster [DBG] osdmap e34: 5 total, 5 up, 5 in 2026-03-10T14:49:50.218 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:49 vm00 bash[28403]: cluster 2026-03-10T14:49:48.841743+0000 mon.a (mon.0) 490 : cluster [DBG] osdmap e34: 5 total, 5 up, 5 in 2026-03-10T14:49:51.973 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/mon.b/config 2026-03-10T14:49:52.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:51 vm03 bash[23394]: cluster 2026-03-10T14:49:50.718532+0000 mgr.y (mgr.14152) 163 : cluster [DBG] pgmap v141: 1 pgs: 1 remapped+peering; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T14:49:52.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:51 vm03 bash[23394]: cluster 2026-03-10T14:49:50.718532+0000 mgr.y (mgr.14152) 163 : cluster [DBG] pgmap v141: 1 pgs: 1 remapped+peering; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T14:49:52.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:51 vm00 bash[20726]: cluster 2026-03-10T14:49:50.718532+0000 mgr.y (mgr.14152) 163 : cluster [DBG] pgmap v141: 1 pgs: 1 remapped+peering; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T14:49:52.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:51 vm00 bash[20726]: cluster 2026-03-10T14:49:50.718532+0000 mgr.y (mgr.14152) 163 : cluster [DBG] pgmap v141: 1 pgs: 1 remapped+peering; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T14:49:52.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:51 vm00 bash[28403]: cluster 2026-03-10T14:49:50.718532+0000 mgr.y (mgr.14152) 163 : cluster [DBG] pgmap v141: 1 pgs: 1 remapped+peering; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T14:49:52.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:51 vm00 bash[28403]: cluster 
2026-03-10T14:49:50.718532+0000 mgr.y (mgr.14152) 163 : cluster [DBG] pgmap v141: 1 pgs: 1 remapped+peering; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T14:49:53.074 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T14:49:53.088 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 -- ceph orch daemon add osd vm03:/dev/vdd 2026-03-10T14:49:54.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:53 vm03 bash[23394]: cluster 2026-03-10T14:49:52.718899+0000 mgr.y (mgr.14152) 164 : cluster [DBG] pgmap v142: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 75 KiB/s, 0 objects/s recovering 2026-03-10T14:49:54.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:53 vm03 bash[23394]: cluster 2026-03-10T14:49:52.718899+0000 mgr.y (mgr.14152) 164 : cluster [DBG] pgmap v142: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 75 KiB/s, 0 objects/s recovering 2026-03-10T14:49:54.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:53 vm03 bash[23394]: audit 2026-03-10T14:49:53.820332+0000 mon.a (mon.0) 491 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:49:54.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:53 vm03 bash[23394]: audit 2026-03-10T14:49:53.820332+0000 mon.a (mon.0) 491 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:49:54.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:53 vm03 bash[23394]: audit 2026-03-10T14:49:53.825026+0000 mon.a (mon.0) 492 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:49:54.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:53 vm03 bash[23394]: audit 2026-03-10T14:49:53.825026+0000 mon.a (mon.0) 492 : audit [INF] 
from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:49:54.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:53 vm03 bash[23394]: audit 2026-03-10T14:49:53.826225+0000 mon.a (mon.0) 493 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch
2026-03-10T14:49:54.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:53 vm03 bash[23394]: audit 2026-03-10T14:49:53.829735+0000 mon.a (mon.0) 494 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:49:54.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:53 vm03 bash[23394]: audit 2026-03-10T14:49:53.830185+0000 mon.a (mon.0) 495 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T14:49:54.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:53 vm03 bash[23394]: audit 2026-03-10T14:49:53.834264+0000 mon.a (mon.0) 496 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:49:54.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:53 vm00 bash[28403]: cluster 2026-03-10T14:49:52.718899+0000 mgr.y (mgr.14152) 164 : cluster [DBG] pgmap v142: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 75 KiB/s, 0 objects/s recovering
2026-03-10T14:49:54.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:53 vm00 bash[28403]: audit 2026-03-10T14:49:53.820332+0000 mon.a (mon.0) 491 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:49:54.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:53 vm00 bash[28403]: audit 2026-03-10T14:49:53.825026+0000 mon.a (mon.0) 492 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:49:54.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:53 vm00 bash[28403]: audit 2026-03-10T14:49:53.826225+0000 mon.a (mon.0) 493 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch
2026-03-10T14:49:54.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:53 vm00 bash[28403]: audit 2026-03-10T14:49:53.829735+0000 mon.a (mon.0) 494 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:49:54.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:53 vm00 bash[28403]: audit 2026-03-10T14:49:53.830185+0000 mon.a (mon.0) 495 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T14:49:54.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:53 vm00 bash[28403]: audit 2026-03-10T14:49:53.834264+0000 mon.a (mon.0) 496 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:49:54.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:53 vm00 bash[20726]: cluster 2026-03-10T14:49:52.718899+0000 mgr.y (mgr.14152) 164 : cluster [DBG] pgmap v142: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 75 KiB/s, 0 objects/s recovering
2026-03-10T14:49:54.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:53 vm00 bash[20726]: audit 2026-03-10T14:49:53.820332+0000 mon.a (mon.0) 491 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:49:54.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:53 vm00 bash[20726]: audit 2026-03-10T14:49:53.825026+0000 mon.a (mon.0) 492 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:49:54.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:53 vm00 bash[20726]: audit 2026-03-10T14:49:53.826225+0000 mon.a (mon.0) 493 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch
2026-03-10T14:49:54.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:53 vm00 bash[20726]: audit 2026-03-10T14:49:53.829735+0000 mon.a (mon.0) 494 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:49:54.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:53 vm00 bash[20726]: audit 2026-03-10T14:49:53.830185+0000 mon.a (mon.0) 495 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T14:49:54.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:53 vm00 bash[20726]: audit 2026-03-10T14:49:53.834264+0000 mon.a (mon.0) 496 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:49:55.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:54 vm03 bash[23394]: cephadm 2026-03-10T14:49:53.812860+0000 mgr.y (mgr.14152) 165 : cephadm [INF] Detected new or changed devices on vm03
2026-03-10T14:49:55.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:54 vm03 bash[23394]: cephadm 2026-03-10T14:49:53.828897+0000 mgr.y (mgr.14152) 166 : cephadm [INF] Adjusting osd_memory_target on vm03 to 455.7M
2026-03-10T14:49:55.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:54 vm03 bash[23394]: cephadm 2026-03-10T14:49:53.829391+0000 mgr.y (mgr.14152) 167 : cephadm [WRN] Unable to set osd_memory_target on vm03 to 477918822: error parsing value: Value '477918822' is below minimum 939524096
2026-03-10T14:49:55.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:54 vm00 bash[20726]: cephadm 2026-03-10T14:49:53.812860+0000 mgr.y (mgr.14152) 165 : cephadm [INF] Detected new or changed devices on vm03
2026-03-10T14:49:55.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:54 vm00 bash[20726]: cephadm 2026-03-10T14:49:53.828897+0000 mgr.y (mgr.14152) 166 : cephadm [INF] Adjusting osd_memory_target on vm03 to 455.7M
2026-03-10T14:49:55.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:54 vm00 bash[20726]: cephadm 2026-03-10T14:49:53.829391+0000 mgr.y (mgr.14152) 167 : cephadm [WRN] Unable to set osd_memory_target on vm03 to 477918822: error parsing value: Value '477918822' is below minimum 939524096
2026-03-10T14:49:55.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:54 vm00 bash[28403]: cephadm 2026-03-10T14:49:53.812860+0000 mgr.y (mgr.14152) 165 : cephadm [INF] Detected new or changed devices on vm03
2026-03-10T14:49:55.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:54 vm00 bash[28403]: cephadm 2026-03-10T14:49:53.828897+0000 mgr.y (mgr.14152) 166 : cephadm [INF] Adjusting osd_memory_target on vm03 to 455.7M
2026-03-10T14:49:55.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:54 vm00 bash[28403]: cephadm 2026-03-10T14:49:53.829391+0000 mgr.y (mgr.14152) 167 : cephadm [WRN] Unable to set osd_memory_target on vm03 to 477918822: error parsing value: Value '477918822' is below minimum 939524096
2026-03-10T14:49:56.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:55 vm00 bash[28403]: cluster 2026-03-10T14:49:54.719213+0000 mgr.y (mgr.14152) 168 : cluster [DBG] pgmap v143: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 57 KiB/s, 0 objects/s recovering
2026-03-10T14:49:56.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:55 vm00 bash[20726]: cluster 2026-03-10T14:49:54.719213+0000 mgr.y (mgr.14152) 168 : cluster [DBG] pgmap v143: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 57 KiB/s, 0 objects/s recovering
2026-03-10T14:49:56.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:55 vm03 bash[23394]: cluster 2026-03-10T14:49:54.719213+0000 mgr.y (mgr.14152) 168 : cluster [DBG] pgmap v143: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 57 KiB/s, 0 objects/s recovering
2026-03-10T14:49:57.737 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/mon.b/config
2026-03-10T14:49:58.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:57 vm03 bash[23394]: cluster 2026-03-10T14:49:56.719436+0000 mgr.y (mgr.14152) 169 : cluster [DBG] pgmap v144: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 50 KiB/s, 0 objects/s recovering
2026-03-10T14:49:58.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:57 vm00 bash[28403]: cluster 2026-03-10T14:49:56.719436+0000 mgr.y (mgr.14152) 169 : cluster [DBG] pgmap v144: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 50 KiB/s, 0 objects/s recovering
2026-03-10T14:49:58.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:57 vm00 bash[20726]: cluster 2026-03-10T14:49:56.719436+0000 mgr.y (mgr.14152) 169 : cluster [DBG] pgmap v144: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 50 KiB/s, 0 objects/s recovering
2026-03-10T14:49:59.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:58 vm00 bash[20726]: audit 2026-03-10T14:49:58.018901+0000 mgr.y (mgr.14152) 170 : audit [DBG] from='client.24205 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T14:49:59.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:58 vm00 bash[20726]: audit 2026-03-10T14:49:58.020635+0000 mon.a (mon.0) 497 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T14:49:59.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:58 vm00 bash[20726]: audit 2026-03-10T14:49:58.022545+0000 mon.a (mon.0) 498 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T14:49:59.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:58 vm00 bash[20726]: audit 2026-03-10T14:49:58.023265+0000 mon.a (mon.0) 499 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:49:59.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:58 vm00 bash[28403]: audit 2026-03-10T14:49:58.018901+0000 mgr.y (mgr.14152) 170 : audit [DBG] from='client.24205 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T14:49:59.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:58 vm00 bash[28403]: audit 2026-03-10T14:49:58.020635+0000 mon.a (mon.0) 497 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T14:49:59.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:58 vm00 bash[28403]: audit 2026-03-10T14:49:58.022545+0000 mon.a (mon.0) 498 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T14:49:59.218 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:58 vm00 bash[28403]: audit 2026-03-10T14:49:58.023265+0000 mon.a (mon.0) 499 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:49:59.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:58 vm03 bash[23394]: audit 2026-03-10T14:49:58.018901+0000 mgr.y (mgr.14152) 170 : audit [DBG] from='client.24205 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T14:49:59.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:58 vm03 bash[23394]: audit 2026-03-10T14:49:58.020635+0000 mon.a (mon.0) 497 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T14:49:59.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:58 vm03 bash[23394]: audit 2026-03-10T14:49:58.022545+0000 mon.a (mon.0) 498 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T14:49:59.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:58 vm03 bash[23394]: audit 2026-03-10T14:49:58.023265+0000 mon.a (mon.0) 499 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:50:00.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:49:59 vm00 bash[20726]: cluster 2026-03-10T14:49:58.719702+0000 mgr.y (mgr.14152) 171 : cluster [DBG] pgmap v145: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 45 KiB/s, 0 objects/s recovering
2026-03-10T14:50:00.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:49:59 vm00 bash[28403]: cluster 2026-03-10T14:49:58.719702+0000 mgr.y (mgr.14152) 171 : cluster [DBG] pgmap v145: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 45 KiB/s, 0 objects/s recovering
2026-03-10T14:50:00.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:49:59 vm03 bash[23394]: cluster 2026-03-10T14:49:58.719702+0000 mgr.y (mgr.14152) 171 : cluster [DBG] pgmap v145: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 45 KiB/s, 0 objects/s recovering
2026-03-10T14:50:01.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:00 vm00 bash[20726]: cluster 2026-03-10T14:50:00.000137+0000 mon.a (mon.0) 500 : cluster [INF] overall HEALTH_OK
2026-03-10T14:50:01.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:00 vm00 bash[28403]: cluster 2026-03-10T14:50:00.000137+0000 mon.a (mon.0) 500 : cluster [INF] overall HEALTH_OK
2026-03-10T14:50:01.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:00 vm03 bash[23394]: cluster 2026-03-10T14:50:00.000137+0000 mon.a (mon.0) 500 : cluster [INF] overall HEALTH_OK
2026-03-10T14:50:02.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:01 vm00 bash[20726]: cluster 2026-03-10T14:50:00.719988+0000 mgr.y (mgr.14152) 172 : cluster [DBG] pgmap v146: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 38 KiB/s, 0 objects/s recovering
2026-03-10T14:50:02.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:01 vm00 bash[28403]: cluster 2026-03-10T14:50:00.719988+0000 mgr.y (mgr.14152) 172 : cluster [DBG] pgmap v146: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 38 KiB/s, 0 objects/s recovering
2026-03-10T14:50:02.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:01 vm03 bash[23394]: cluster 2026-03-10T14:50:00.719988+0000 mgr.y (mgr.14152) 172 : cluster [DBG] pgmap v146: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 38 KiB/s, 0 objects/s recovering
2026-03-10T14:50:04.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:03 vm00 bash[20726]: cluster 2026-03-10T14:50:02.720337+0000 mgr.y (mgr.14152) 173 : cluster [DBG] pgmap v147: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 37 KiB/s, 0 objects/s recovering
2026-03-10T14:50:04.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:03 vm00 bash[20726]: audit 2026-03-10T14:50:03.479391+0000 mon.b (mon.1) 14 : audit [INF] from='client.? 192.168.123.103:0/718376803' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "bb51bca8-ec91-4c05-94f6-3755aef22a35"}]: dispatch
2026-03-10T14:50:04.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:03 vm00 bash[20726]: audit 2026-03-10T14:50:03.480787+0000 mon.a (mon.0) 501 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "bb51bca8-ec91-4c05-94f6-3755aef22a35"}]: dispatch
2026-03-10T14:50:04.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:03 vm00 bash[20726]: audit 2026-03-10T14:50:03.484448+0000 mon.a (mon.0) 502 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "bb51bca8-ec91-4c05-94f6-3755aef22a35"}]': finished
2026-03-10T14:50:04.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:03 vm00 bash[20726]: cluster 2026-03-10T14:50:03.488178+0000 mon.a (mon.0) 503 : cluster [DBG] osdmap e35: 6 total, 5 up, 6 in
2026-03-10T14:50:04.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:03 vm00 bash[20726]: audit 2026-03-10T14:50:03.488339+0000 mon.a (mon.0) 504 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T14:50:04.218 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:03 vm00 bash[28403]: cluster 2026-03-10T14:50:02.720337+0000 mgr.y (mgr.14152) 173 : cluster [DBG] pgmap v147: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 37 KiB/s, 0 objects/s recovering
2026-03-10T14:50:04.218 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:03 vm00 bash[28403]: audit 2026-03-10T14:50:03.479391+0000 mon.b (mon.1) 14 : audit [INF] from='client.? 192.168.123.103:0/718376803' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "bb51bca8-ec91-4c05-94f6-3755aef22a35"}]: dispatch
2026-03-10T14:50:04.218 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:03 vm00 bash[28403]: audit 2026-03-10T14:50:03.480787+0000 mon.a (mon.0) 501 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "bb51bca8-ec91-4c05-94f6-3755aef22a35"}]: dispatch
2026-03-10T14:50:04.218 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:03 vm00 bash[28403]: audit 2026-03-10T14:50:03.484448+0000 mon.a (mon.0) 502 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "bb51bca8-ec91-4c05-94f6-3755aef22a35"}]': finished
2026-03-10T14:50:04.218 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:03 vm00 bash[28403]: cluster 2026-03-10T14:50:03.488178+0000 mon.a (mon.0) 503 : cluster [DBG] osdmap e35: 6 total, 5 up, 6 in
2026-03-10T14:50:04.218 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:03 vm00 bash[28403]: audit 2026-03-10T14:50:03.488339+0000 mon.a (mon.0) 504 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T14:50:04.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:03 vm03 bash[23394]: cluster 2026-03-10T14:50:02.720337+0000 mgr.y (mgr.14152) 173 : cluster [DBG] pgmap v147: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 37 KiB/s, 0 objects/s recovering
2026-03-10T14:50:04.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:03 vm03 bash[23394]: audit 2026-03-10T14:50:03.479391+0000 mon.b (mon.1) 14 : audit [INF] from='client.? 192.168.123.103:0/718376803' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "bb51bca8-ec91-4c05-94f6-3755aef22a35"}]: dispatch
2026-03-10T14:50:04.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:03 vm03 bash[23394]: audit 2026-03-10T14:50:03.480787+0000 mon.a (mon.0) 501 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "bb51bca8-ec91-4c05-94f6-3755aef22a35"}]: dispatch
2026-03-10T14:50:04.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:03 vm03 bash[23394]: audit 2026-03-10T14:50:03.484448+0000 mon.a (mon.0) 502 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "bb51bca8-ec91-4c05-94f6-3755aef22a35"}]': finished
2026-03-10T14:50:04.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:03 vm03 bash[23394]: audit 2026-03-10T14:50:03.484448+0000 mon.a (mon.0) 502 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "bb51bca8-ec91-4c05-94f6-3755aef22a35"}]': finished 2026-03-10T14:50:04.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:03 vm03 bash[23394]: cluster 2026-03-10T14:50:03.488178+0000 mon.a (mon.0) 503 : cluster [DBG] osdmap e35: 6 total, 5 up, 6 in 2026-03-10T14:50:04.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:03 vm03 bash[23394]: cluster 2026-03-10T14:50:03.488178+0000 mon.a (mon.0) 503 : cluster [DBG] osdmap e35: 6 total, 5 up, 6 in 2026-03-10T14:50:04.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:03 vm03 bash[23394]: audit 2026-03-10T14:50:03.488339+0000 mon.a (mon.0) 504 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T14:50:04.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:03 vm03 bash[23394]: audit 2026-03-10T14:50:03.488339+0000 mon.a (mon.0) 504 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T14:50:05.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:04 vm00 bash[20726]: audit 2026-03-10T14:50:04.241378+0000 mon.b (mon.1) 15 : audit [DBG] from='client.? 192.168.123.103:0/947723855' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T14:50:05.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:04 vm00 bash[20726]: audit 2026-03-10T14:50:04.241378+0000 mon.b (mon.1) 15 : audit [DBG] from='client.? 192.168.123.103:0/947723855' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T14:50:05.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:04 vm00 bash[28403]: audit 2026-03-10T14:50:04.241378+0000 mon.b (mon.1) 15 : audit [DBG] from='client.? 
192.168.123.103:0/947723855' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T14:50:05.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:04 vm00 bash[28403]: audit 2026-03-10T14:50:04.241378+0000 mon.b (mon.1) 15 : audit [DBG] from='client.? 192.168.123.103:0/947723855' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T14:50:05.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:04 vm03 bash[23394]: audit 2026-03-10T14:50:04.241378+0000 mon.b (mon.1) 15 : audit [DBG] from='client.? 192.168.123.103:0/947723855' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T14:50:05.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:04 vm03 bash[23394]: audit 2026-03-10T14:50:04.241378+0000 mon.b (mon.1) 15 : audit [DBG] from='client.? 192.168.123.103:0/947723855' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T14:50:06.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:05 vm00 bash[28403]: cluster 2026-03-10T14:50:04.720605+0000 mgr.y (mgr.14152) 174 : cluster [DBG] pgmap v149: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T14:50:06.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:05 vm00 bash[28403]: cluster 2026-03-10T14:50:04.720605+0000 mgr.y (mgr.14152) 174 : cluster [DBG] pgmap v149: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T14:50:06.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:05 vm00 bash[20726]: cluster 2026-03-10T14:50:04.720605+0000 mgr.y (mgr.14152) 174 : cluster [DBG] pgmap v149: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T14:50:06.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:05 vm00 bash[20726]: cluster 2026-03-10T14:50:04.720605+0000 mgr.y (mgr.14152) 174 : cluster [DBG] pgmap v149: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 
2026-03-10T14:50:06.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:05 vm03 bash[23394]: cluster 2026-03-10T14:50:04.720605+0000 mgr.y (mgr.14152) 174 : cluster [DBG] pgmap v149: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T14:50:06.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:05 vm03 bash[23394]: cluster 2026-03-10T14:50:04.720605+0000 mgr.y (mgr.14152) 174 : cluster [DBG] pgmap v149: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T14:50:08.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:07 vm03 bash[23394]: cluster 2026-03-10T14:50:06.720876+0000 mgr.y (mgr.14152) 175 : cluster [DBG] pgmap v150: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T14:50:08.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:07 vm03 bash[23394]: cluster 2026-03-10T14:50:06.720876+0000 mgr.y (mgr.14152) 175 : cluster [DBG] pgmap v150: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T14:50:08.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:07 vm00 bash[28403]: cluster 2026-03-10T14:50:06.720876+0000 mgr.y (mgr.14152) 175 : cluster [DBG] pgmap v150: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T14:50:08.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:07 vm00 bash[28403]: cluster 2026-03-10T14:50:06.720876+0000 mgr.y (mgr.14152) 175 : cluster [DBG] pgmap v150: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T14:50:08.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:07 vm00 bash[20726]: cluster 2026-03-10T14:50:06.720876+0000 mgr.y (mgr.14152) 175 : cluster [DBG] pgmap v150: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T14:50:08.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:07 vm00 bash[20726]: cluster 2026-03-10T14:50:06.720876+0000 mgr.y (mgr.14152) 175 : 
cluster [DBG] pgmap v150: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T14:50:10.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:09 vm00 bash[28403]: cluster 2026-03-10T14:50:08.721171+0000 mgr.y (mgr.14152) 176 : cluster [DBG] pgmap v151: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T14:50:10.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:09 vm00 bash[28403]: cluster 2026-03-10T14:50:08.721171+0000 mgr.y (mgr.14152) 176 : cluster [DBG] pgmap v151: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T14:50:10.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:09 vm00 bash[20726]: cluster 2026-03-10T14:50:08.721171+0000 mgr.y (mgr.14152) 176 : cluster [DBG] pgmap v151: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T14:50:10.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:09 vm00 bash[20726]: cluster 2026-03-10T14:50:08.721171+0000 mgr.y (mgr.14152) 176 : cluster [DBG] pgmap v151: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T14:50:10.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:09 vm03 bash[23394]: cluster 2026-03-10T14:50:08.721171+0000 mgr.y (mgr.14152) 176 : cluster [DBG] pgmap v151: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T14:50:10.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:09 vm03 bash[23394]: cluster 2026-03-10T14:50:08.721171+0000 mgr.y (mgr.14152) 176 : cluster [DBG] pgmap v151: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T14:50:12.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:11 vm00 bash[28403]: cluster 2026-03-10T14:50:10.721462+0000 mgr.y (mgr.14152) 177 : cluster [DBG] pgmap v152: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T14:50:12.217 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:11 vm00 bash[28403]: cluster 2026-03-10T14:50:10.721462+0000 mgr.y (mgr.14152) 177 : cluster [DBG] pgmap v152: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T14:50:12.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:11 vm00 bash[20726]: cluster 2026-03-10T14:50:10.721462+0000 mgr.y (mgr.14152) 177 : cluster [DBG] pgmap v152: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T14:50:12.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:11 vm00 bash[20726]: cluster 2026-03-10T14:50:10.721462+0000 mgr.y (mgr.14152) 177 : cluster [DBG] pgmap v152: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T14:50:12.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:11 vm03 bash[23394]: cluster 2026-03-10T14:50:10.721462+0000 mgr.y (mgr.14152) 177 : cluster [DBG] pgmap v152: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T14:50:12.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:11 vm03 bash[23394]: cluster 2026-03-10T14:50:10.721462+0000 mgr.y (mgr.14152) 177 : cluster [DBG] pgmap v152: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T14:50:14.285 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:14 vm03 bash[23394]: cluster 2026-03-10T14:50:12.721742+0000 mgr.y (mgr.14152) 178 : cluster [DBG] pgmap v153: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T14:50:14.285 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:14 vm03 bash[23394]: cluster 2026-03-10T14:50:12.721742+0000 mgr.y (mgr.14152) 178 : cluster [DBG] pgmap v153: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T14:50:14.285 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:14 vm03 bash[23394]: audit 2026-03-10T14:50:13.006003+0000 mon.a (mon.0) 505 : audit [INF] from='mgr.14152 
192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-10T14:50:14.285 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:14 vm03 bash[23394]: audit 2026-03-10T14:50:13.006003+0000 mon.a (mon.0) 505 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-10T14:50:14.285 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:14 vm03 bash[23394]: audit 2026-03-10T14:50:13.006483+0000 mon.a (mon.0) 506 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:50:14.285 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:14 vm03 bash[23394]: audit 2026-03-10T14:50:13.006483+0000 mon.a (mon.0) 506 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:50:14.285 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:14 vm03 bash[23394]: cephadm 2026-03-10T14:50:13.006862+0000 mgr.y (mgr.14152) 179 : cephadm [INF] Deploying daemon osd.5 on vm03 2026-03-10T14:50:14.285 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:14 vm03 bash[23394]: cephadm 2026-03-10T14:50:13.006862+0000 mgr.y (mgr.14152) 179 : cephadm [INF] Deploying daemon osd.5 on vm03 2026-03-10T14:50:14.285 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:14 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-10T14:50:14.285 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:50:14 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:50:14.285 INFO:journalctl@ceph.osd.4.vm03.stdout:Mar 10 14:50:14 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:50:14.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:14 vm00 bash[28403]: cluster 2026-03-10T14:50:12.721742+0000 mgr.y (mgr.14152) 178 : cluster [DBG] pgmap v153: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T14:50:14.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:14 vm00 bash[28403]: cluster 2026-03-10T14:50:12.721742+0000 mgr.y (mgr.14152) 178 : cluster [DBG] pgmap v153: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T14:50:14.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:14 vm00 bash[28403]: audit 2026-03-10T14:50:13.006003+0000 mon.a (mon.0) 505 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-10T14:50:14.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:14 vm00 bash[28403]: audit 2026-03-10T14:50:13.006003+0000 mon.a (mon.0) 505 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": 
"osd.5"}]: dispatch 2026-03-10T14:50:14.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:14 vm00 bash[28403]: audit 2026-03-10T14:50:13.006483+0000 mon.a (mon.0) 506 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:50:14.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:14 vm00 bash[28403]: audit 2026-03-10T14:50:13.006483+0000 mon.a (mon.0) 506 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:50:14.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:14 vm00 bash[28403]: cephadm 2026-03-10T14:50:13.006862+0000 mgr.y (mgr.14152) 179 : cephadm [INF] Deploying daemon osd.5 on vm03 2026-03-10T14:50:14.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:14 vm00 bash[28403]: cephadm 2026-03-10T14:50:13.006862+0000 mgr.y (mgr.14152) 179 : cephadm [INF] Deploying daemon osd.5 on vm03 2026-03-10T14:50:14.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:14 vm00 bash[20726]: cluster 2026-03-10T14:50:12.721742+0000 mgr.y (mgr.14152) 178 : cluster [DBG] pgmap v153: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T14:50:14.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:14 vm00 bash[20726]: cluster 2026-03-10T14:50:12.721742+0000 mgr.y (mgr.14152) 178 : cluster [DBG] pgmap v153: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T14:50:14.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:14 vm00 bash[20726]: audit 2026-03-10T14:50:13.006003+0000 mon.a (mon.0) 505 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-10T14:50:14.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:14 vm00 bash[20726]: audit 2026-03-10T14:50:13.006003+0000 mon.a (mon.0) 505 : audit [INF] from='mgr.14152 
192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-10T14:50:14.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:14 vm00 bash[20726]: audit 2026-03-10T14:50:13.006483+0000 mon.a (mon.0) 506 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:50:14.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:14 vm00 bash[20726]: audit 2026-03-10T14:50:13.006483+0000 mon.a (mon.0) 506 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:50:14.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:14 vm00 bash[20726]: cephadm 2026-03-10T14:50:13.006862+0000 mgr.y (mgr.14152) 179 : cephadm [INF] Deploying daemon osd.5 on vm03 2026-03-10T14:50:14.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:14 vm00 bash[20726]: cephadm 2026-03-10T14:50:13.006862+0000 mgr.y (mgr.14152) 179 : cephadm [INF] Deploying daemon osd.5 on vm03 2026-03-10T14:50:14.626 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:50:14 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:50:14.626 INFO:journalctl@ceph.osd.4.vm03.stdout:Mar 10 14:50:14 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:50:14.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:14 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:50:15.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:15 vm03 bash[23394]: audit 2026-03-10T14:50:14.551444+0000 mon.a (mon.0) 507 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:50:15.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:15 vm03 bash[23394]: audit 2026-03-10T14:50:14.551444+0000 mon.a (mon.0) 507 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:50:15.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:15 vm03 bash[23394]: audit 2026-03-10T14:50:14.558675+0000 mon.a (mon.0) 508 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:50:15.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:15 vm03 bash[23394]: audit 2026-03-10T14:50:14.558675+0000 mon.a (mon.0) 508 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:50:15.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:15 vm03 bash[23394]: audit 2026-03-10T14:50:14.563752+0000 mon.a (mon.0) 509 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:50:15.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:15 vm03 bash[23394]: audit 2026-03-10T14:50:14.563752+0000 mon.a (mon.0) 509 : audit [INF] from='mgr.14152 
192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:50:15.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:15 vm00 bash[28403]: audit 2026-03-10T14:50:14.551444+0000 mon.a (mon.0) 507 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:50:15.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:15 vm00 bash[28403]: audit 2026-03-10T14:50:14.551444+0000 mon.a (mon.0) 507 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:50:15.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:15 vm00 bash[28403]: audit 2026-03-10T14:50:14.558675+0000 mon.a (mon.0) 508 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:50:15.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:15 vm00 bash[28403]: audit 2026-03-10T14:50:14.558675+0000 mon.a (mon.0) 508 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:50:15.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:15 vm00 bash[28403]: audit 2026-03-10T14:50:14.563752+0000 mon.a (mon.0) 509 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:50:15.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:15 vm00 bash[28403]: audit 2026-03-10T14:50:14.563752+0000 mon.a (mon.0) 509 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:50:15.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:15 vm00 bash[20726]: audit 2026-03-10T14:50:14.551444+0000 mon.a (mon.0) 507 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:50:15.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:15 vm00 bash[20726]: audit 2026-03-10T14:50:14.551444+0000 mon.a (mon.0) 507 : audit [DBG] 
from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:50:15.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:15 vm00 bash[20726]: audit 2026-03-10T14:50:14.558675+0000 mon.a (mon.0) 508 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:50:15.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:15 vm00 bash[20726]: audit 2026-03-10T14:50:14.558675+0000 mon.a (mon.0) 508 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:50:15.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:15 vm00 bash[20726]: audit 2026-03-10T14:50:14.563752+0000 mon.a (mon.0) 509 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:50:15.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:15 vm00 bash[20726]: audit 2026-03-10T14:50:14.563752+0000 mon.a (mon.0) 509 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:50:16.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:16 vm03 bash[23394]: cluster 2026-03-10T14:50:14.721990+0000 mgr.y (mgr.14152) 180 : cluster [DBG] pgmap v154: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T14:50:16.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:16 vm03 bash[23394]: cluster 2026-03-10T14:50:14.721990+0000 mgr.y (mgr.14152) 180 : cluster [DBG] pgmap v154: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T14:50:16.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:16 vm00 bash[28403]: cluster 2026-03-10T14:50:14.721990+0000 mgr.y (mgr.14152) 180 : cluster [DBG] pgmap v154: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T14:50:16.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:16 vm00 bash[28403]: cluster 2026-03-10T14:50:14.721990+0000 mgr.y (mgr.14152) 180 : cluster [DBG] pgmap 
v154: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T14:50:16.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:16 vm00 bash[20726]: cluster 2026-03-10T14:50:14.721990+0000 mgr.y (mgr.14152) 180 : cluster [DBG] pgmap v154: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T14:50:16.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:16 vm00 bash[20726]: cluster 2026-03-10T14:50:14.721990+0000 mgr.y (mgr.14152) 180 : cluster [DBG] pgmap v154: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T14:50:18.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:18 vm03 bash[23394]: cluster 2026-03-10T14:50:16.722348+0000 mgr.y (mgr.14152) 181 : cluster [DBG] pgmap v155: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T14:50:18.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:18 vm03 bash[23394]: cluster 2026-03-10T14:50:16.722348+0000 mgr.y (mgr.14152) 181 : cluster [DBG] pgmap v155: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T14:50:18.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:18 vm00 bash[28403]: cluster 2026-03-10T14:50:16.722348+0000 mgr.y (mgr.14152) 181 : cluster [DBG] pgmap v155: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T14:50:18.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:18 vm00 bash[28403]: cluster 2026-03-10T14:50:16.722348+0000 mgr.y (mgr.14152) 181 : cluster [DBG] pgmap v155: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T14:50:18.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:18 vm00 bash[20726]: cluster 2026-03-10T14:50:16.722348+0000 mgr.y (mgr.14152) 181 : cluster [DBG] pgmap v155: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T14:50:18.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:18 vm00 
bash[20726]: cluster 2026-03-10T14:50:16.722348+0000 mgr.y (mgr.14152) 181 : cluster [DBG] pgmap v155: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T14:50:19.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:19 vm03 bash[23394]: audit 2026-03-10T14:50:18.134343+0000 mon.b (mon.1) 16 : audit [INF] from='osd.5 v2:192.168.123.103:6804/413751251' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-10T14:50:19.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:19 vm03 bash[23394]: audit 2026-03-10T14:50:18.134343+0000 mon.b (mon.1) 16 : audit [INF] from='osd.5 v2:192.168.123.103:6804/413751251' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-10T14:50:19.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:19 vm03 bash[23394]: audit 2026-03-10T14:50:18.135649+0000 mon.a (mon.0) 510 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-10T14:50:19.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:19 vm03 bash[23394]: audit 2026-03-10T14:50:18.135649+0000 mon.a (mon.0) 510 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-10T14:50:19.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:19 vm00 bash[28403]: audit 2026-03-10T14:50:18.134343+0000 mon.b (mon.1) 16 : audit [INF] from='osd.5 v2:192.168.123.103:6804/413751251' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-10T14:50:19.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:19 vm00 bash[28403]: audit 2026-03-10T14:50:18.134343+0000 mon.b (mon.1) 16 : audit [INF] from='osd.5 v2:192.168.123.103:6804/413751251' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: 
dispatch 2026-03-10T14:50:19.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:19 vm00 bash[28403]: audit 2026-03-10T14:50:18.135649+0000 mon.a (mon.0) 510 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-10T14:50:19.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:19 vm00 bash[28403]: audit 2026-03-10T14:50:18.135649+0000 mon.a (mon.0) 510 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-10T14:50:19.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:19 vm00 bash[20726]: audit 2026-03-10T14:50:18.134343+0000 mon.b (mon.1) 16 : audit [INF] from='osd.5 v2:192.168.123.103:6804/413751251' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-10T14:50:19.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:19 vm00 bash[20726]: audit 2026-03-10T14:50:18.134343+0000 mon.b (mon.1) 16 : audit [INF] from='osd.5 v2:192.168.123.103:6804/413751251' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-10T14:50:19.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:19 vm00 bash[20726]: audit 2026-03-10T14:50:18.135649+0000 mon.a (mon.0) 510 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-10T14:50:19.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:19 vm00 bash[20726]: audit 2026-03-10T14:50:18.135649+0000 mon.a (mon.0) 510 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-10T14:50:20.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:20 vm03 bash[23394]: cluster 2026-03-10T14:50:18.722750+0000 mgr.y (mgr.14152) 182 : cluster [DBG] pgmap v156: 1 pgs: 1 active+clean; 449 KiB data, 134 
MiB used, 100 GiB / 100 GiB avail 2026-03-10T14:50:20.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:20 vm03 bash[23394]: cluster 2026-03-10T14:50:18.722750+0000 mgr.y (mgr.14152) 182 : cluster [DBG] pgmap v156: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T14:50:20.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:20 vm03 bash[23394]: audit 2026-03-10T14:50:19.054292+0000 mon.a (mon.0) 511 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-10T14:50:20.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:20 vm03 bash[23394]: audit 2026-03-10T14:50:19.054292+0000 mon.a (mon.0) 511 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-10T14:50:20.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:20 vm03 bash[23394]: cluster 2026-03-10T14:50:19.057314+0000 mon.a (mon.0) 512 : cluster [DBG] osdmap e36: 6 total, 5 up, 6 in 2026-03-10T14:50:20.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:20 vm03 bash[23394]: cluster 2026-03-10T14:50:19.057314+0000 mon.a (mon.0) 512 : cluster [DBG] osdmap e36: 6 total, 5 up, 6 in 2026-03-10T14:50:20.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:20 vm03 bash[23394]: audit 2026-03-10T14:50:19.057453+0000 mon.b (mon.1) 17 : audit [INF] from='osd.5 v2:192.168.123.103:6804/413751251' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-10T14:50:20.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:20 vm03 bash[23394]: audit 2026-03-10T14:50:19.057453+0000 mon.b (mon.1) 17 : audit [INF] from='osd.5 v2:192.168.123.103:6804/413751251' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-10T14:50:20.375 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:20 vm03 bash[23394]: audit 2026-03-10T14:50:19.057470+0000 mon.a (mon.0) 513 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T14:50:20.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:20 vm03 bash[23394]: audit 2026-03-10T14:50:19.057470+0000 mon.a (mon.0) 513 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T14:50:20.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:20 vm03 bash[23394]: audit 2026-03-10T14:50:19.058626+0000 mon.a (mon.0) 514 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-10T14:50:20.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:20 vm03 bash[23394]: audit 2026-03-10T14:50:19.058626+0000 mon.a (mon.0) 514 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-10T14:50:20.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:20 vm00 bash[28403]: cluster 2026-03-10T14:50:18.722750+0000 mgr.y (mgr.14152) 182 : cluster [DBG] pgmap v156: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T14:50:20.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:20 vm00 bash[28403]: cluster 2026-03-10T14:50:18.722750+0000 mgr.y (mgr.14152) 182 : cluster [DBG] pgmap v156: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T14:50:20.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:20 vm00 bash[28403]: audit 2026-03-10T14:50:19.054292+0000 mon.a (mon.0) 511 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 
2026-03-10T14:50:20.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:20 vm00 bash[28403]: audit 2026-03-10T14:50:19.054292+0000 mon.a (mon.0) 511 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-10T14:50:20.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:20 vm00 bash[28403]: cluster 2026-03-10T14:50:19.057314+0000 mon.a (mon.0) 512 : cluster [DBG] osdmap e36: 6 total, 5 up, 6 in 2026-03-10T14:50:20.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:20 vm00 bash[28403]: cluster 2026-03-10T14:50:19.057314+0000 mon.a (mon.0) 512 : cluster [DBG] osdmap e36: 6 total, 5 up, 6 in 2026-03-10T14:50:20.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:20 vm00 bash[28403]: audit 2026-03-10T14:50:19.057453+0000 mon.b (mon.1) 17 : audit [INF] from='osd.5 v2:192.168.123.103:6804/413751251' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-10T14:50:20.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:20 vm00 bash[28403]: audit 2026-03-10T14:50:19.057453+0000 mon.b (mon.1) 17 : audit [INF] from='osd.5 v2:192.168.123.103:6804/413751251' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-10T14:50:20.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:20 vm00 bash[28403]: audit 2026-03-10T14:50:19.057470+0000 mon.a (mon.0) 513 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T14:50:20.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:20 vm00 bash[28403]: audit 2026-03-10T14:50:19.057470+0000 mon.a (mon.0) 513 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T14:50:20.467 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:20 vm00 bash[28403]: audit 2026-03-10T14:50:19.058626+0000 mon.a (mon.0) 514 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-10T14:50:20.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:20 vm00 bash[28403]: audit 2026-03-10T14:50:19.058626+0000 mon.a (mon.0) 514 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-10T14:50:20.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:20 vm00 bash[20726]: cluster 2026-03-10T14:50:18.722750+0000 mgr.y (mgr.14152) 182 : cluster [DBG] pgmap v156: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T14:50:20.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:20 vm00 bash[20726]: cluster 2026-03-10T14:50:18.722750+0000 mgr.y (mgr.14152) 182 : cluster [DBG] pgmap v156: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T14:50:20.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:20 vm00 bash[20726]: audit 2026-03-10T14:50:19.054292+0000 mon.a (mon.0) 511 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-10T14:50:20.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:20 vm00 bash[20726]: audit 2026-03-10T14:50:19.054292+0000 mon.a (mon.0) 511 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-10T14:50:20.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:20 vm00 bash[20726]: cluster 2026-03-10T14:50:19.057314+0000 mon.a (mon.0) 512 : cluster [DBG] osdmap e36: 6 total, 5 up, 6 in 2026-03-10T14:50:20.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:20 vm00 
bash[20726]: cluster 2026-03-10T14:50:19.057314+0000 mon.a (mon.0) 512 : cluster [DBG] osdmap e36: 6 total, 5 up, 6 in 2026-03-10T14:50:20.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:20 vm00 bash[20726]: audit 2026-03-10T14:50:19.057453+0000 mon.b (mon.1) 17 : audit [INF] from='osd.5 v2:192.168.123.103:6804/413751251' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-10T14:50:20.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:20 vm00 bash[20726]: audit 2026-03-10T14:50:19.057453+0000 mon.b (mon.1) 17 : audit [INF] from='osd.5 v2:192.168.123.103:6804/413751251' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-10T14:50:20.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:20 vm00 bash[20726]: audit 2026-03-10T14:50:19.057470+0000 mon.a (mon.0) 513 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T14:50:20.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:20 vm00 bash[20726]: audit 2026-03-10T14:50:19.057470+0000 mon.a (mon.0) 513 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T14:50:20.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:20 vm00 bash[20726]: audit 2026-03-10T14:50:19.058626+0000 mon.a (mon.0) 514 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-10T14:50:20.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:20 vm00 bash[20726]: audit 2026-03-10T14:50:19.058626+0000 mon.a (mon.0) 514 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: 
dispatch 2026-03-10T14:50:21.117 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:21 vm03 bash[23394]: audit 2026-03-10T14:50:20.061469+0000 mon.a (mon.0) 515 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished 2026-03-10T14:50:21.117 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:21 vm03 bash[23394]: audit 2026-03-10T14:50:20.061469+0000 mon.a (mon.0) 515 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished 2026-03-10T14:50:21.117 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:21 vm03 bash[23394]: cluster 2026-03-10T14:50:20.066537+0000 mon.a (mon.0) 516 : cluster [DBG] osdmap e37: 6 total, 5 up, 6 in 2026-03-10T14:50:21.117 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:21 vm03 bash[23394]: cluster 2026-03-10T14:50:20.066537+0000 mon.a (mon.0) 516 : cluster [DBG] osdmap e37: 6 total, 5 up, 6 in 2026-03-10T14:50:21.117 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:21 vm03 bash[23394]: audit 2026-03-10T14:50:20.067238+0000 mon.a (mon.0) 517 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T14:50:21.117 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:21 vm03 bash[23394]: audit 2026-03-10T14:50:20.067238+0000 mon.a (mon.0) 517 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T14:50:21.117 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:21 vm03 bash[23394]: audit 2026-03-10T14:50:20.077261+0000 mon.a (mon.0) 518 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T14:50:21.117 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:21 vm03 bash[23394]: audit 
2026-03-10T14:50:20.077261+0000 mon.a (mon.0) 518 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T14:50:21.117 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:21 vm03 bash[23394]: audit 2026-03-10T14:50:20.806800+0000 mon.a (mon.0) 519 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:50:21.117 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:21 vm03 bash[23394]: audit 2026-03-10T14:50:20.806800+0000 mon.a (mon.0) 519 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:50:21.117 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:21 vm03 bash[23394]: audit 2026-03-10T14:50:20.820728+0000 mon.a (mon.0) 520 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:50:21.117 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:21 vm03 bash[23394]: audit 2026-03-10T14:50:20.820728+0000 mon.a (mon.0) 520 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:50:21.117 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:21 vm03 bash[23394]: audit 2026-03-10T14:50:20.821679+0000 mon.a (mon.0) 521 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:50:21.117 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:21 vm03 bash[23394]: audit 2026-03-10T14:50:20.821679+0000 mon.a (mon.0) 521 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:50:21.117 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:21 vm03 bash[23394]: audit 2026-03-10T14:50:20.822281+0000 mon.a (mon.0) 522 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:50:21.117 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:21 vm03 bash[23394]: audit 2026-03-10T14:50:20.822281+0000 mon.a (mon.0) 522 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:50:21.117 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:21 vm03 bash[23394]: audit 2026-03-10T14:50:20.839129+0000 mon.a (mon.0) 523 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:50:21.117 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:21 vm03 bash[23394]: audit 2026-03-10T14:50:20.839129+0000 mon.a (mon.0) 523 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:50:21.117 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:21 vm03 bash[23394]: audit 2026-03-10T14:50:21.069962+0000 mon.a (mon.0) 524 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T14:50:21.117 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:21 vm03 bash[23394]: audit 2026-03-10T14:50:21.069962+0000 mon.a (mon.0) 524 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T14:50:21.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:21 vm00 bash[28403]: audit 2026-03-10T14:50:20.061469+0000 mon.a (mon.0) 515 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished 2026-03-10T14:50:21.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:21 vm00 bash[28403]: audit 2026-03-10T14:50:20.061469+0000 mon.a (mon.0) 515 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished 2026-03-10T14:50:21.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:21 
vm00 bash[28403]: cluster 2026-03-10T14:50:20.066537+0000 mon.a (mon.0) 516 : cluster [DBG] osdmap e37: 6 total, 5 up, 6 in 2026-03-10T14:50:21.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:21 vm00 bash[28403]: cluster 2026-03-10T14:50:20.066537+0000 mon.a (mon.0) 516 : cluster [DBG] osdmap e37: 6 total, 5 up, 6 in 2026-03-10T14:50:21.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:21 vm00 bash[28403]: audit 2026-03-10T14:50:20.067238+0000 mon.a (mon.0) 517 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T14:50:21.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:21 vm00 bash[28403]: audit 2026-03-10T14:50:20.067238+0000 mon.a (mon.0) 517 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T14:50:21.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:21 vm00 bash[28403]: audit 2026-03-10T14:50:20.077261+0000 mon.a (mon.0) 518 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T14:50:21.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:21 vm00 bash[28403]: audit 2026-03-10T14:50:20.077261+0000 mon.a (mon.0) 518 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T14:50:21.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:21 vm00 bash[28403]: audit 2026-03-10T14:50:20.806800+0000 mon.a (mon.0) 519 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:50:21.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:21 vm00 bash[28403]: audit 2026-03-10T14:50:20.806800+0000 mon.a (mon.0) 519 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:50:21.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:21 vm00 bash[28403]: 
audit 2026-03-10T14:50:20.820728+0000 mon.a (mon.0) 520 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:50:21.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:21 vm00 bash[28403]: audit 2026-03-10T14:50:20.820728+0000 mon.a (mon.0) 520 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:50:21.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:21 vm00 bash[28403]: audit 2026-03-10T14:50:20.821679+0000 mon.a (mon.0) 521 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:50:21.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:21 vm00 bash[28403]: audit 2026-03-10T14:50:20.821679+0000 mon.a (mon.0) 521 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:50:21.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:21 vm00 bash[28403]: audit 2026-03-10T14:50:20.822281+0000 mon.a (mon.0) 522 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:50:21.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:21 vm00 bash[28403]: audit 2026-03-10T14:50:20.822281+0000 mon.a (mon.0) 522 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:50:21.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:21 vm00 bash[28403]: audit 2026-03-10T14:50:20.839129+0000 mon.a (mon.0) 523 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:50:21.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:21 vm00 bash[28403]: audit 2026-03-10T14:50:20.839129+0000 mon.a (mon.0) 523 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:50:21.467 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:21 vm00 bash[28403]: audit 2026-03-10T14:50:21.069962+0000 mon.a (mon.0) 524 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T14:50:21.468 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:21 vm00 bash[28403]: audit 2026-03-10T14:50:21.069962+0000 mon.a (mon.0) 524 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T14:50:21.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:21 vm00 bash[20726]: audit 2026-03-10T14:50:20.061469+0000 mon.a (mon.0) 515 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished 2026-03-10T14:50:21.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:21 vm00 bash[20726]: audit 2026-03-10T14:50:20.061469+0000 mon.a (mon.0) 515 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished 2026-03-10T14:50:21.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:21 vm00 bash[20726]: cluster 2026-03-10T14:50:20.066537+0000 mon.a (mon.0) 516 : cluster [DBG] osdmap e37: 6 total, 5 up, 6 in 2026-03-10T14:50:21.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:21 vm00 bash[20726]: cluster 2026-03-10T14:50:20.066537+0000 mon.a (mon.0) 516 : cluster [DBG] osdmap e37: 6 total, 5 up, 6 in 2026-03-10T14:50:21.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:21 vm00 bash[20726]: audit 2026-03-10T14:50:20.067238+0000 mon.a (mon.0) 517 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T14:50:21.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:21 vm00 bash[20726]: audit 2026-03-10T14:50:20.067238+0000 mon.a 
(mon.0) 517 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T14:50:21.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:21 vm00 bash[20726]: audit 2026-03-10T14:50:20.077261+0000 mon.a (mon.0) 518 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T14:50:21.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:21 vm00 bash[20726]: audit 2026-03-10T14:50:20.077261+0000 mon.a (mon.0) 518 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T14:50:21.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:21 vm00 bash[20726]: audit 2026-03-10T14:50:20.806800+0000 mon.a (mon.0) 519 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:50:21.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:21 vm00 bash[20726]: audit 2026-03-10T14:50:20.806800+0000 mon.a (mon.0) 519 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:50:21.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:21 vm00 bash[20726]: audit 2026-03-10T14:50:20.820728+0000 mon.a (mon.0) 520 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:50:21.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:21 vm00 bash[20726]: audit 2026-03-10T14:50:20.820728+0000 mon.a (mon.0) 520 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:50:21.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:21 vm00 bash[20726]: audit 2026-03-10T14:50:20.821679+0000 mon.a (mon.0) 521 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:50:21.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:21 vm00 bash[20726]: 
audit 2026-03-10T14:50:20.821679+0000 mon.a (mon.0) 521 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:50:21.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:21 vm00 bash[20726]: audit 2026-03-10T14:50:20.822281+0000 mon.a (mon.0) 522 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:50:21.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:21 vm00 bash[20726]: audit 2026-03-10T14:50:20.822281+0000 mon.a (mon.0) 522 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:50:21.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:21 vm00 bash[20726]: audit 2026-03-10T14:50:20.839129+0000 mon.a (mon.0) 523 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:50:21.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:21 vm00 bash[20726]: audit 2026-03-10T14:50:20.839129+0000 mon.a (mon.0) 523 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:50:21.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:21 vm00 bash[20726]: audit 2026-03-10T14:50:21.069962+0000 mon.a (mon.0) 524 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T14:50:21.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:21 vm00 bash[20726]: audit 2026-03-10T14:50:21.069962+0000 mon.a (mon.0) 524 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T14:50:21.919 INFO:teuthology.orchestra.run.vm03.stdout:Created osd(s) 5 on host 'vm03' 2026-03-10T14:50:21.988 DEBUG:teuthology.orchestra.run.vm03:osd.5> sudo journalctl -f -n 0 -u 
ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@osd.5.service 2026-03-10T14:50:21.989 INFO:tasks.cephadm:Deploying osd.6 on vm03 with /dev/vdc... 2026-03-10T14:50:21.989 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 -- lvm zap /dev/vdc 2026-03-10T14:50:22.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:22 vm03 bash[23394]: cluster 2026-03-10T14:50:19.132033+0000 osd.5 (osd.5) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T14:50:22.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:22 vm03 bash[23394]: cluster 2026-03-10T14:50:19.132033+0000 osd.5 (osd.5) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T14:50:22.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:22 vm03 bash[23394]: cluster 2026-03-10T14:50:19.132078+0000 osd.5 (osd.5) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T14:50:22.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:22 vm03 bash[23394]: cluster 2026-03-10T14:50:19.132078+0000 osd.5 (osd.5) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T14:50:22.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:22 vm03 bash[23394]: cluster 2026-03-10T14:50:20.723036+0000 mgr.y (mgr.14152) 183 : cluster [DBG] pgmap v159: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T14:50:22.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:22 vm03 bash[23394]: cluster 2026-03-10T14:50:20.723036+0000 mgr.y (mgr.14152) 183 : cluster [DBG] pgmap v159: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T14:50:22.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:22 vm03 bash[23394]: cluster 2026-03-10T14:50:21.098154+0000 mon.a (mon.0) 525 : cluster [DBG] osdmap e38: 6 total, 5 up, 6 in 2026-03-10T14:50:22.125 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:22 vm03 bash[23394]: cluster 2026-03-10T14:50:21.098154+0000 mon.a (mon.0) 525 : cluster [DBG] osdmap e38: 6 total, 5 up, 6 in 2026-03-10T14:50:22.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:22 vm03 bash[23394]: audit 2026-03-10T14:50:21.098266+0000 mon.a (mon.0) 526 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T14:50:22.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:22 vm03 bash[23394]: audit 2026-03-10T14:50:21.098266+0000 mon.a (mon.0) 526 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T14:50:22.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:22 vm03 bash[23394]: audit 2026-03-10T14:50:21.114106+0000 mon.a (mon.0) 527 : audit [INF] from='osd.5 ' entity='osd.5' 2026-03-10T14:50:22.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:22 vm03 bash[23394]: audit 2026-03-10T14:50:21.114106+0000 mon.a (mon.0) 527 : audit [INF] from='osd.5 ' entity='osd.5' 2026-03-10T14:50:22.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:22 vm03 bash[23394]: audit 2026-03-10T14:50:21.905877+0000 mon.a (mon.0) 528 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:50:22.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:22 vm03 bash[23394]: audit 2026-03-10T14:50:21.905877+0000 mon.a (mon.0) 528 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:50:22.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:22 vm03 bash[23394]: audit 2026-03-10T14:50:21.912218+0000 mon.a (mon.0) 529 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:50:22.125 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:22 vm03 bash[23394]: audit 2026-03-10T14:50:21.912218+0000 mon.a (mon.0) 529 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:50:22.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:22 vm03 bash[23394]: audit 2026-03-10T14:50:21.917059+0000 mon.a (mon.0) 530 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:50:22.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:22 vm03 bash[23394]: audit 2026-03-10T14:50:22.070012+0000 mon.a (mon.0) 531 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T14:50:22.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:22 vm00 bash[28403]: cluster 2026-03-10T14:50:19.132033+0000 osd.5 (osd.5) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T14:50:22.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:22 vm00 bash[28403]: cluster 2026-03-10T14:50:19.132078+0000 osd.5 (osd.5) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T14:50:22.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:22 vm00 bash[28403]: cluster 2026-03-10T14:50:20.723036+0000 mgr.y (mgr.14152) 183 : cluster [DBG] pgmap v159: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-10T14:50:22.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:22 vm00 bash[28403]: cluster 2026-03-10T14:50:21.098154+0000 mon.a (mon.0) 525 : cluster [DBG] osdmap e38: 6 total, 5 up, 6 in
2026-03-10T14:50:22.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:22 vm00 bash[28403]: audit 2026-03-10T14:50:21.098266+0000 mon.a (mon.0) 526 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T14:50:22.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:22 vm00 bash[28403]: audit 2026-03-10T14:50:21.114106+0000 mon.a (mon.0) 527 : audit [INF] from='osd.5 ' entity='osd.5'
2026-03-10T14:50:22.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:22 vm00 bash[28403]: audit 2026-03-10T14:50:21.905877+0000 mon.a (mon.0) 528 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:50:22.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:22 vm00 bash[28403]: audit 2026-03-10T14:50:21.912218+0000 mon.a (mon.0) 529 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:50:22.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:22 vm00 bash[28403]: audit 2026-03-10T14:50:21.917059+0000 mon.a (mon.0) 530 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:50:22.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:22 vm00 bash[28403]: audit 2026-03-10T14:50:22.070012+0000 mon.a (mon.0) 531 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T14:50:22.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:22 vm00 bash[20726]: cluster 2026-03-10T14:50:19.132033+0000 osd.5 (osd.5) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T14:50:22.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:22 vm00 bash[20726]: cluster 2026-03-10T14:50:19.132078+0000 osd.5 (osd.5) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T14:50:22.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:22 vm00 bash[20726]: cluster 2026-03-10T14:50:20.723036+0000 mgr.y (mgr.14152) 183 : cluster [DBG] pgmap v159: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-10T14:50:22.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:22 vm00 bash[20726]: cluster 2026-03-10T14:50:21.098154+0000 mon.a (mon.0) 525 : cluster [DBG] osdmap e38: 6 total, 5 up, 6 in
2026-03-10T14:50:22.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:22 vm00 bash[20726]: audit 2026-03-10T14:50:21.098266+0000 mon.a (mon.0) 526 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T14:50:22.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:22 vm00 bash[20726]: audit 2026-03-10T14:50:21.114106+0000 mon.a (mon.0) 527 : audit [INF] from='osd.5 ' entity='osd.5'
2026-03-10T14:50:22.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:22 vm00 bash[20726]: audit 2026-03-10T14:50:21.905877+0000 mon.a (mon.0) 528 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:50:22.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:22 vm00 bash[20726]: audit 2026-03-10T14:50:21.912218+0000 mon.a (mon.0) 529 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:50:22.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:22 vm00 bash[20726]: audit 2026-03-10T14:50:21.917059+0000 mon.a (mon.0) 530 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:50:22.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:22 vm00 bash[20726]: audit 2026-03-10T14:50:22.070012+0000 mon.a (mon.0) 531 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T14:50:23.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:23 vm03 bash[23394]: cluster 2026-03-10T14:50:22.112491+0000 mon.a (mon.0) 532 : cluster [INF] osd.5 v2:192.168.123.103:6804/413751251 boot
2026-03-10T14:50:23.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:23 vm03 bash[23394]: cluster 2026-03-10T14:50:22.112673+0000 mon.a (mon.0) 533 : cluster [DBG] osdmap e39: 6 total, 6 up, 6 in
2026-03-10T14:50:23.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:23 vm03 bash[23394]: audit 2026-03-10T14:50:22.112954+0000 mon.a (mon.0) 534 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T14:50:23.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:23 vm00 bash[28403]: cluster 2026-03-10T14:50:22.112491+0000 mon.a (mon.0) 532 : cluster [INF] osd.5 v2:192.168.123.103:6804/413751251 boot
2026-03-10T14:50:23.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:23 vm00 bash[28403]: cluster 2026-03-10T14:50:22.112673+0000 mon.a (mon.0) 533 : cluster [DBG] osdmap e39: 6 total, 6 up, 6 in
2026-03-10T14:50:23.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:23 vm00 bash[28403]: audit 2026-03-10T14:50:22.112954+0000 mon.a (mon.0) 534 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T14:50:23.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:23 vm00 bash[20726]: cluster 2026-03-10T14:50:22.112491+0000 mon.a (mon.0) 532 : cluster [INF] osd.5 v2:192.168.123.103:6804/413751251 boot
2026-03-10T14:50:23.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:23 vm00 bash[20726]: cluster 2026-03-10T14:50:22.112673+0000 mon.a (mon.0) 533 : cluster [DBG] osdmap e39: 6 total, 6 up, 6 in
2026-03-10T14:50:23.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:23 vm00 bash[20726]: audit 2026-03-10T14:50:22.112954+0000 mon.a (mon.0) 534 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T14:50:24.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:24 vm03 bash[23394]: cluster 2026-03-10T14:50:22.723340+0000 mgr.y (mgr.14152) 184 : cluster [DBG] pgmap v162: 1 pgs: 1 remapped+peering; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-10T14:50:24.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:24 vm03 bash[23394]: cluster 2026-03-10T14:50:23.105162+0000 mon.a (mon.0) 535 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)
2026-03-10T14:50:24.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:24 vm03 bash[23394]: cluster 2026-03-10T14:50:23.118213+0000 mon.a (mon.0) 536 : cluster [DBG] osdmap e40: 6 total, 6 up, 6 in
2026-03-10T14:50:24.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:24 vm00 bash[28403]: cluster 2026-03-10T14:50:22.723340+0000 mgr.y (mgr.14152) 184 : cluster [DBG] pgmap v162: 1 pgs: 1 remapped+peering; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-10T14:50:24.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:24 vm00 bash[28403]: cluster 2026-03-10T14:50:23.105162+0000 mon.a (mon.0) 535 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)
2026-03-10T14:50:24.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:24 vm00 bash[28403]: cluster 2026-03-10T14:50:23.118213+0000 mon.a (mon.0) 536 : cluster [DBG] osdmap e40: 6 total, 6 up, 6 in
2026-03-10T14:50:24.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:24 vm00 bash[20726]: cluster 2026-03-10T14:50:22.723340+0000 mgr.y (mgr.14152) 184 : cluster [DBG] pgmap v162: 1 pgs: 1 remapped+peering; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-10T14:50:24.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:24 vm00 bash[20726]: cluster 2026-03-10T14:50:23.105162+0000 mon.a (mon.0) 535 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)
2026-03-10T14:50:24.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:24 vm00 bash[20726]: cluster 2026-03-10T14:50:23.118213+0000 mon.a (mon.0) 536 : cluster [DBG] osdmap e40: 6 total, 6 up, 6 in
2026-03-10T14:50:25.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:25 vm00 bash[28403]: cluster 2026-03-10T14:50:24.117955+0000 mon.a (mon.0) 537 : cluster [DBG] osdmap e41: 6 total, 6 up, 6 in
2026-03-10T14:50:25.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:25 vm00 bash[20726]: cluster 2026-03-10T14:50:24.117955+0000 mon.a (mon.0) 537 : cluster [DBG] osdmap e41: 6 total, 6 up, 6 in
2026-03-10T14:50:25.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:25 vm03 bash[23394]: cluster 2026-03-10T14:50:24.117955+0000 mon.a (mon.0) 537 : cluster [DBG] osdmap e41: 6 total, 6 up, 6 in
2026-03-10T14:50:26.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:26 vm00 bash[28403]: cluster 2026-03-10T14:50:24.723598+0000 mgr.y (mgr.14152) 185 : cluster [DBG] pgmap v165: 1 pgs: 1 remapped+peering; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail
2026-03-10T14:50:26.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:26 vm00 bash[20726]: cluster 2026-03-10T14:50:24.723598+0000 mgr.y (mgr.14152) 185 : cluster [DBG] pgmap v165: 1 pgs: 1 remapped+peering; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail
2026-03-10T14:50:26.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:26 vm03 bash[23394]: cluster 2026-03-10T14:50:24.723598+0000 mgr.y (mgr.14152) 185 : cluster [DBG] pgmap v165: 1 pgs: 1 remapped+peering; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail
2026-03-10T14:50:26.660 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/mon.b/config
2026-03-10T14:50:27.575 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T14:50:27.594 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 -- ceph orch daemon add osd vm03:/dev/vdc
2026-03-10T14:50:28.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:28 vm00 bash[28403]: cluster 2026-03-10T14:50:26.723852+0000 mgr.y (mgr.14152) 186 : cluster [DBG] pgmap v166: 1 pgs: 1 remapped+peering; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail
2026-03-10T14:50:28.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:28 vm00 bash[20726]: cluster 2026-03-10T14:50:26.723852+0000 mgr.y (mgr.14152) 186 : cluster [DBG] pgmap v166: 1 pgs: 1 remapped+peering; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail
2026-03-10T14:50:28.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:28 vm03 bash[23394]: cluster 2026-03-10T14:50:26.723852+0000 mgr.y (mgr.14152) 186 : cluster [DBG] pgmap v166: 1 pgs: 1 remapped+peering; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail
2026-03-10T14:50:29.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:29 vm00 bash[28403]: cephadm 2026-03-10T14:50:28.402478+0000 mgr.y (mgr.14152) 187 : cephadm [INF] Detected new or changed devices on vm03
2026-03-10T14:50:29.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:29 vm00 bash[28403]: audit 2026-03-10T14:50:28.408882+0000 mon.a (mon.0) 538 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:50:29.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:29 vm00 bash[28403]: audit 2026-03-10T14:50:28.414240+0000 mon.a (mon.0) 539 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:50:29.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:29 vm00 bash[28403]: audit 2026-03-10T14:50:28.415305+0000 mon.a (mon.0) 540 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch
2026-03-10T14:50:29.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:29 vm00 bash[28403]: audit 2026-03-10T14:50:28.415724+0000 mon.a (mon.0) 541 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch
2026-03-10T14:50:29.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:29 vm00 bash[28403]: cephadm 2026-03-10T14:50:28.416230+0000 mgr.y (mgr.14152) 188 : cephadm [INF] Adjusting osd_memory_target on vm03 to 227.8M
2026-03-10T14:50:29.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:29 vm00 bash[28403]: cephadm 2026-03-10T14:50:28.416732+0000 mgr.y (mgr.14152) 189 : cephadm [WRN] Unable to set osd_memory_target on vm03 to 238959411: error parsing value: Value '238959411' is below minimum 939524096
2026-03-10T14:50:29.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:29 vm00 bash[28403]: audit 2026-03-10T14:50:28.417039+0000 mon.a (mon.0) 542 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:50:29.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:29 vm00 bash[28403]: audit 2026-03-10T14:50:28.417518+0000 mon.a (mon.0) 543 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T14:50:29.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:29 vm00 bash[28403]: audit 2026-03-10T14:50:28.422826+0000 mon.a (mon.0) 544 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:50:29.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:29 vm00 bash[28403]: cluster 2026-03-10T14:50:28.724130+0000 mgr.y (mgr.14152) 190 : cluster [DBG] pgmap v167: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 68 KiB/s, 0 objects/s recovering
2026-03-10T14:50:29.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:29 vm00 bash[20726]: cephadm 2026-03-10T14:50:28.402478+0000 mgr.y (mgr.14152) 187 : cephadm [INF] Detected new or changed devices on vm03
2026-03-10T14:50:29.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:29 vm00 bash[20726]: audit 2026-03-10T14:50:28.408882+0000 mon.a (mon.0) 538 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:50:29.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:29 vm00 bash[20726]: audit 2026-03-10T14:50:28.414240+0000 mon.a (mon.0) 539 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:50:29.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:29 vm00 bash[20726]: audit 2026-03-10T14:50:28.415305+0000 mon.a (mon.0) 540 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch
2026-03-10T14:50:29.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:29 vm00 bash[20726]: audit 2026-03-10T14:50:28.415724+0000 mon.a (mon.0) 541 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch
2026-03-10T14:50:29.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:29 vm00 bash[20726]: cephadm 2026-03-10T14:50:28.416230+0000 mgr.y (mgr.14152) 188 : cephadm [INF] Adjusting osd_memory_target on vm03 to 227.8M
2026-03-10T14:50:29.718 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:29 vm00 bash[20726]: cephadm 2026-03-10T14:50:28.416732+0000 mgr.y (mgr.14152) 189 : cephadm [WRN] Unable to set osd_memory_target on vm03 to 238959411: error parsing value: Value '238959411' is below minimum 939524096
2026-03-10T14:50:29.718 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:29 vm00 bash[20726]: audit 2026-03-10T14:50:28.417039+0000 mon.a (mon.0) 542 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:50:29.718 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:29 vm00 bash[20726]: audit 2026-03-10T14:50:28.417518+0000 mon.a (mon.0) 543 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T14:50:29.718 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:29 vm00 bash[20726]: audit 2026-03-10T14:50:28.422826+0000 mon.a (mon.0) 544 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:50:29.718 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:29 vm00 bash[20726]: cluster 2026-03-10T14:50:28.724130+0000 mgr.y (mgr.14152) 190 : cluster [DBG] pgmap v167: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 68 KiB/s, 0 objects/s recovering
2026-03-10T14:50:29.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:29 vm03 bash[23394]: cephadm 2026-03-10T14:50:28.402478+0000 mgr.y (mgr.14152) 187 : cephadm [INF] Detected new or changed devices on vm03
2026-03-10T14:50:29.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:29 vm03 bash[23394]: audit 2026-03-10T14:50:28.408882+0000 mon.a (mon.0) 538 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:50:29.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:29 vm03 bash[23394]: audit 2026-03-10T14:50:28.414240+0000 mon.a (mon.0) 539 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:50:29.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:29 vm03 bash[23394]: audit 2026-03-10T14:50:28.415305+0000 mon.a (mon.0) 540 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch
2026-03-10T14:50:29.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:29 vm03 bash[23394]: audit 2026-03-10T14:50:28.415724+0000 mon.a (mon.0) 541 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch
2026-03-10T14:50:29.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:29 vm03 bash[23394]: cephadm 2026-03-10T14:50:28.416230+0000 mgr.y (mgr.14152) 188 : cephadm [INF] Adjusting osd_memory_target on vm03 to 227.8M
2026-03-10T14:50:29.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:29 vm03 bash[23394]: cephadm 2026-03-10T14:50:28.416732+0000 mgr.y (mgr.14152) 189 : cephadm [WRN] Unable to set osd_memory_target on vm03 to 238959411: error parsing value: Value '238959411' is below minimum 939524096
2026-03-10T14:50:29.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:29 vm03 bash[23394]: audit 2026-03-10T14:50:28.417039+0000 mon.a (mon.0) 542 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:50:29.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:29
vm03 bash[23394]: audit 2026-03-10T14:50:28.417039+0000 mon.a (mon.0) 542 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:50:29.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:29 vm03 bash[23394]: audit 2026-03-10T14:50:28.417518+0000 mon.a (mon.0) 543 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:50:29.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:29 vm03 bash[23394]: audit 2026-03-10T14:50:28.417518+0000 mon.a (mon.0) 543 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:50:29.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:29 vm03 bash[23394]: audit 2026-03-10T14:50:28.422826+0000 mon.a (mon.0) 544 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:50:29.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:29 vm03 bash[23394]: audit 2026-03-10T14:50:28.422826+0000 mon.a (mon.0) 544 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:50:29.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:29 vm03 bash[23394]: cluster 2026-03-10T14:50:28.724130+0000 mgr.y (mgr.14152) 190 : cluster [DBG] pgmap v167: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 68 KiB/s, 0 objects/s recovering 2026-03-10T14:50:29.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:29 vm03 bash[23394]: cluster 2026-03-10T14:50:28.724130+0000 mgr.y (mgr.14152) 190 : cluster [DBG] pgmap v167: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 68 KiB/s, 0 objects/s recovering 2026-03-10T14:50:30.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:30 vm00 bash[28403]: cluster 2026-03-10T14:50:29.420643+0000 mon.a (mon.0) 545 : 
cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg peering) 2026-03-10T14:50:30.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:30 vm00 bash[28403]: cluster 2026-03-10T14:50:29.420643+0000 mon.a (mon.0) 545 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg peering) 2026-03-10T14:50:30.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:30 vm00 bash[28403]: cluster 2026-03-10T14:50:29.420671+0000 mon.a (mon.0) 546 : cluster [INF] Cluster is now healthy 2026-03-10T14:50:30.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:30 vm00 bash[28403]: cluster 2026-03-10T14:50:29.420671+0000 mon.a (mon.0) 546 : cluster [INF] Cluster is now healthy 2026-03-10T14:50:30.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:30 vm00 bash[20726]: cluster 2026-03-10T14:50:29.420643+0000 mon.a (mon.0) 545 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg peering) 2026-03-10T14:50:30.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:30 vm00 bash[20726]: cluster 2026-03-10T14:50:29.420643+0000 mon.a (mon.0) 545 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg peering) 2026-03-10T14:50:30.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:30 vm00 bash[20726]: cluster 2026-03-10T14:50:29.420671+0000 mon.a (mon.0) 546 : cluster [INF] Cluster is now healthy 2026-03-10T14:50:30.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:30 vm00 bash[20726]: cluster 2026-03-10T14:50:29.420671+0000 mon.a (mon.0) 546 : cluster [INF] Cluster is now healthy 2026-03-10T14:50:30.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:30 vm03 bash[23394]: cluster 2026-03-10T14:50:29.420643+0000 mon.a (mon.0) 545 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg peering) 2026-03-10T14:50:30.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:30 vm03 bash[23394]: 
cluster 2026-03-10T14:50:29.420643+0000 mon.a (mon.0) 545 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg peering) 2026-03-10T14:50:30.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:30 vm03 bash[23394]: cluster 2026-03-10T14:50:29.420671+0000 mon.a (mon.0) 546 : cluster [INF] Cluster is now healthy 2026-03-10T14:50:30.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:30 vm03 bash[23394]: cluster 2026-03-10T14:50:29.420671+0000 mon.a (mon.0) 546 : cluster [INF] Cluster is now healthy 2026-03-10T14:50:31.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:31 vm03 bash[23394]: cluster 2026-03-10T14:50:30.724374+0000 mgr.y (mgr.14152) 191 : cluster [DBG] pgmap v168: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 56 KiB/s, 0 objects/s recovering 2026-03-10T14:50:31.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:31 vm03 bash[23394]: cluster 2026-03-10T14:50:30.724374+0000 mgr.y (mgr.14152) 191 : cluster [DBG] pgmap v168: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 56 KiB/s, 0 objects/s recovering 2026-03-10T14:50:31.967 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:31 vm00 bash[28403]: cluster 2026-03-10T14:50:30.724374+0000 mgr.y (mgr.14152) 191 : cluster [DBG] pgmap v168: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 56 KiB/s, 0 objects/s recovering 2026-03-10T14:50:31.967 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:31 vm00 bash[28403]: cluster 2026-03-10T14:50:30.724374+0000 mgr.y (mgr.14152) 191 : cluster [DBG] pgmap v168: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 56 KiB/s, 0 objects/s recovering 2026-03-10T14:50:31.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:31 vm00 bash[20726]: cluster 2026-03-10T14:50:30.724374+0000 mgr.y (mgr.14152) 191 : cluster [DBG] pgmap v168: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 56 
KiB/s, 0 objects/s recovering 2026-03-10T14:50:31.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:31 vm00 bash[20726]: cluster 2026-03-10T14:50:30.724374+0000 mgr.y (mgr.14152) 191 : cluster [DBG] pgmap v168: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 56 KiB/s, 0 objects/s recovering 2026-03-10T14:50:32.238 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/mon.b/config 2026-03-10T14:50:32.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:32 vm03 bash[23394]: audit 2026-03-10T14:50:32.525642+0000 mon.a (mon.0) 547 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T14:50:32.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:32 vm03 bash[23394]: audit 2026-03-10T14:50:32.525642+0000 mon.a (mon.0) 547 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T14:50:32.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:32 vm03 bash[23394]: audit 2026-03-10T14:50:32.526851+0000 mon.a (mon.0) 548 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T14:50:32.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:32 vm03 bash[23394]: audit 2026-03-10T14:50:32.526851+0000 mon.a (mon.0) 548 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T14:50:32.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:32 vm03 bash[23394]: audit 2026-03-10T14:50:32.527201+0000 mon.a (mon.0) 549 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 
2026-03-10T14:50:32.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:32 vm03 bash[23394]: audit 2026-03-10T14:50:32.527201+0000 mon.a (mon.0) 549 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:50:32.967 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:32 vm00 bash[28403]: audit 2026-03-10T14:50:32.525642+0000 mon.a (mon.0) 547 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T14:50:32.967 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:32 vm00 bash[28403]: audit 2026-03-10T14:50:32.525642+0000 mon.a (mon.0) 547 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T14:50:32.967 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:32 vm00 bash[28403]: audit 2026-03-10T14:50:32.526851+0000 mon.a (mon.0) 548 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T14:50:32.967 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:32 vm00 bash[28403]: audit 2026-03-10T14:50:32.526851+0000 mon.a (mon.0) 548 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T14:50:32.967 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:32 vm00 bash[28403]: audit 2026-03-10T14:50:32.527201+0000 mon.a (mon.0) 549 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:50:32.967 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:32 vm00 bash[28403]: audit 2026-03-10T14:50:32.527201+0000 mon.a (mon.0) 549 : audit [DBG] from='mgr.14152 
192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:50:32.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:32 vm00 bash[20726]: audit 2026-03-10T14:50:32.525642+0000 mon.a (mon.0) 547 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T14:50:32.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:32 vm00 bash[20726]: audit 2026-03-10T14:50:32.525642+0000 mon.a (mon.0) 547 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T14:50:32.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:32 vm00 bash[20726]: audit 2026-03-10T14:50:32.526851+0000 mon.a (mon.0) 548 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T14:50:32.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:32 vm00 bash[20726]: audit 2026-03-10T14:50:32.526851+0000 mon.a (mon.0) 548 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T14:50:32.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:32 vm00 bash[20726]: audit 2026-03-10T14:50:32.527201+0000 mon.a (mon.0) 549 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:50:32.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:32 vm00 bash[20726]: audit 2026-03-10T14:50:32.527201+0000 mon.a (mon.0) 549 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:50:33.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:33 vm03 bash[23394]: 
audit 2026-03-10T14:50:32.524454+0000 mgr.y (mgr.14152) 192 : audit [DBG] from='client.24232 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:50:33.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:33 vm03 bash[23394]: audit 2026-03-10T14:50:32.524454+0000 mgr.y (mgr.14152) 192 : audit [DBG] from='client.24232 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:50:33.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:33 vm03 bash[23394]: cluster 2026-03-10T14:50:32.724620+0000 mgr.y (mgr.14152) 193 : cluster [DBG] pgmap v169: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 47 KiB/s, 0 objects/s recovering 2026-03-10T14:50:33.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:33 vm03 bash[23394]: cluster 2026-03-10T14:50:32.724620+0000 mgr.y (mgr.14152) 193 : cluster [DBG] pgmap v169: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 47 KiB/s, 0 objects/s recovering 2026-03-10T14:50:33.967 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:33 vm00 bash[28403]: audit 2026-03-10T14:50:32.524454+0000 mgr.y (mgr.14152) 192 : audit [DBG] from='client.24232 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:50:33.967 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:33 vm00 bash[28403]: audit 2026-03-10T14:50:32.524454+0000 mgr.y (mgr.14152) 192 : audit [DBG] from='client.24232 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:50:33.967 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:33 vm00 bash[28403]: cluster 2026-03-10T14:50:32.724620+0000 mgr.y (mgr.14152) 193 : cluster [DBG] pgmap v169: 1 pgs: 1 active+clean; 449 KiB data, 
161 MiB used, 120 GiB / 120 GiB avail; 47 KiB/s, 0 objects/s recovering 2026-03-10T14:50:33.967 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:33 vm00 bash[28403]: cluster 2026-03-10T14:50:32.724620+0000 mgr.y (mgr.14152) 193 : cluster [DBG] pgmap v169: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 47 KiB/s, 0 objects/s recovering 2026-03-10T14:50:33.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:33 vm00 bash[20726]: audit 2026-03-10T14:50:32.524454+0000 mgr.y (mgr.14152) 192 : audit [DBG] from='client.24232 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:50:33.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:33 vm00 bash[20726]: audit 2026-03-10T14:50:32.524454+0000 mgr.y (mgr.14152) 192 : audit [DBG] from='client.24232 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:50:33.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:33 vm00 bash[20726]: cluster 2026-03-10T14:50:32.724620+0000 mgr.y (mgr.14152) 193 : cluster [DBG] pgmap v169: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 47 KiB/s, 0 objects/s recovering 2026-03-10T14:50:33.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:33 vm00 bash[20726]: cluster 2026-03-10T14:50:32.724620+0000 mgr.y (mgr.14152) 193 : cluster [DBG] pgmap v169: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 47 KiB/s, 0 objects/s recovering 2026-03-10T14:50:36.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:35 vm03 bash[23394]: cluster 2026-03-10T14:50:34.724840+0000 mgr.y (mgr.14152) 194 : cluster [DBG] pgmap v170: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 42 KiB/s, 0 objects/s recovering 2026-03-10T14:50:36.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:35 vm03 bash[23394]: 
cluster 2026-03-10T14:50:34.724840+0000 mgr.y (mgr.14152) 194 : cluster [DBG] pgmap v170: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 42 KiB/s, 0 objects/s recovering 2026-03-10T14:50:36.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:35 vm00 bash[28403]: cluster 2026-03-10T14:50:34.724840+0000 mgr.y (mgr.14152) 194 : cluster [DBG] pgmap v170: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 42 KiB/s, 0 objects/s recovering 2026-03-10T14:50:36.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:35 vm00 bash[28403]: cluster 2026-03-10T14:50:34.724840+0000 mgr.y (mgr.14152) 194 : cluster [DBG] pgmap v170: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 42 KiB/s, 0 objects/s recovering 2026-03-10T14:50:36.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:35 vm00 bash[20726]: cluster 2026-03-10T14:50:34.724840+0000 mgr.y (mgr.14152) 194 : cluster [DBG] pgmap v170: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 42 KiB/s, 0 objects/s recovering 2026-03-10T14:50:36.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:35 vm00 bash[20726]: cluster 2026-03-10T14:50:34.724840+0000 mgr.y (mgr.14152) 194 : cluster [DBG] pgmap v170: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 42 KiB/s, 0 objects/s recovering 2026-03-10T14:50:38.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:37 vm03 bash[23394]: cluster 2026-03-10T14:50:36.725123+0000 mgr.y (mgr.14152) 195 : cluster [DBG] pgmap v171: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-10T14:50:38.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:37 vm03 bash[23394]: cluster 2026-03-10T14:50:36.725123+0000 mgr.y (mgr.14152) 195 : cluster [DBG] pgmap v171: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-10T14:50:38.217 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:37 vm00 bash[28403]: cluster 2026-03-10T14:50:36.725123+0000 mgr.y (mgr.14152) 195 : cluster [DBG] pgmap v171: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-10T14:50:38.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:37 vm00 bash[28403]: cluster 2026-03-10T14:50:36.725123+0000 mgr.y (mgr.14152) 195 : cluster [DBG] pgmap v171: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-10T14:50:38.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:37 vm00 bash[20726]: cluster 2026-03-10T14:50:36.725123+0000 mgr.y (mgr.14152) 195 : cluster [DBG] pgmap v171: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-10T14:50:38.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:37 vm00 bash[20726]: cluster 2026-03-10T14:50:36.725123+0000 mgr.y (mgr.14152) 195 : cluster [DBG] pgmap v171: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-10T14:50:39.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:38 vm03 bash[23394]: audit 2026-03-10T14:50:37.899920+0000 mon.b (mon.1) 18 : audit [INF] from='client.? 192.168.123.103:0/3622869395' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d5d7abd1-1279-4f32-bce7-89f79446b2d1"}]: dispatch 2026-03-10T14:50:39.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:38 vm03 bash[23394]: audit 2026-03-10T14:50:37.899920+0000 mon.b (mon.1) 18 : audit [INF] from='client.? 192.168.123.103:0/3622869395' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d5d7abd1-1279-4f32-bce7-89f79446b2d1"}]: dispatch 2026-03-10T14:50:39.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:38 vm03 bash[23394]: audit 2026-03-10T14:50:37.901667+0000 mon.a (mon.0) 550 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d5d7abd1-1279-4f32-bce7-89f79446b2d1"}]: dispatch 2026-03-10T14:50:39.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:38 vm03 bash[23394]: audit 2026-03-10T14:50:37.901667+0000 mon.a (mon.0) 550 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d5d7abd1-1279-4f32-bce7-89f79446b2d1"}]: dispatch 2026-03-10T14:50:39.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:38 vm03 bash[23394]: audit 2026-03-10T14:50:37.904998+0000 mon.a (mon.0) 551 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d5d7abd1-1279-4f32-bce7-89f79446b2d1"}]': finished 2026-03-10T14:50:39.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:38 vm03 bash[23394]: audit 2026-03-10T14:50:37.904998+0000 mon.a (mon.0) 551 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d5d7abd1-1279-4f32-bce7-89f79446b2d1"}]': finished 2026-03-10T14:50:39.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:38 vm03 bash[23394]: cluster 2026-03-10T14:50:37.908925+0000 mon.a (mon.0) 552 : cluster [DBG] osdmap e42: 7 total, 6 up, 7 in 2026-03-10T14:50:39.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:38 vm03 bash[23394]: cluster 2026-03-10T14:50:37.908925+0000 mon.a (mon.0) 552 : cluster [DBG] osdmap e42: 7 total, 6 up, 7 in 2026-03-10T14:50:39.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:38 vm03 bash[23394]: audit 2026-03-10T14:50:37.909232+0000 mon.a (mon.0) 553 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T14:50:39.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:38 vm03 bash[23394]: audit 2026-03-10T14:50:37.909232+0000 mon.a (mon.0) 553 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 
2026-03-10T14:50:39.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:38 vm03 bash[23394]: audit 2026-03-10T14:50:38.538760+0000 mon.b (mon.1) 19 : audit [DBG] from='client.? 192.168.123.103:0/2589983105' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T14:50:39.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:38 vm03 bash[23394]: audit 2026-03-10T14:50:38.538760+0000 mon.b (mon.1) 19 : audit [DBG] from='client.? 192.168.123.103:0/2589983105' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T14:50:39.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:38 vm00 bash[28403]: audit 2026-03-10T14:50:37.899920+0000 mon.b (mon.1) 18 : audit [INF] from='client.? 192.168.123.103:0/3622869395' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d5d7abd1-1279-4f32-bce7-89f79446b2d1"}]: dispatch 2026-03-10T14:50:39.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:38 vm00 bash[28403]: audit 2026-03-10T14:50:37.899920+0000 mon.b (mon.1) 18 : audit [INF] from='client.? 192.168.123.103:0/3622869395' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d5d7abd1-1279-4f32-bce7-89f79446b2d1"}]: dispatch 2026-03-10T14:50:39.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:38 vm00 bash[28403]: audit 2026-03-10T14:50:37.901667+0000 mon.a (mon.0) 550 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d5d7abd1-1279-4f32-bce7-89f79446b2d1"}]: dispatch 2026-03-10T14:50:39.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:38 vm00 bash[28403]: audit 2026-03-10T14:50:37.901667+0000 mon.a (mon.0) 550 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d5d7abd1-1279-4f32-bce7-89f79446b2d1"}]: dispatch 2026-03-10T14:50:39.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:38 vm00 bash[28403]: audit 2026-03-10T14:50:37.904998+0000 mon.a (mon.0) 551 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d5d7abd1-1279-4f32-bce7-89f79446b2d1"}]': finished 2026-03-10T14:50:39.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:38 vm00 bash[28403]: audit 2026-03-10T14:50:37.904998+0000 mon.a (mon.0) 551 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d5d7abd1-1279-4f32-bce7-89f79446b2d1"}]': finished 2026-03-10T14:50:39.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:38 vm00 bash[28403]: cluster 2026-03-10T14:50:37.908925+0000 mon.a (mon.0) 552 : cluster [DBG] osdmap e42: 7 total, 6 up, 7 in 2026-03-10T14:50:39.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:38 vm00 bash[28403]: cluster 2026-03-10T14:50:37.908925+0000 mon.a (mon.0) 552 : cluster [DBG] osdmap e42: 7 total, 6 up, 7 in 2026-03-10T14:50:39.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:38 vm00 bash[28403]: audit 2026-03-10T14:50:37.909232+0000 mon.a (mon.0) 553 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T14:50:39.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:38 vm00 bash[28403]: audit 2026-03-10T14:50:37.909232+0000 mon.a (mon.0) 553 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T14:50:39.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:38 vm00 bash[28403]: audit 2026-03-10T14:50:38.538760+0000 mon.b (mon.1) 19 : audit [DBG] from='client.? 192.168.123.103:0/2589983105' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T14:50:39.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:38 vm00 bash[28403]: audit 2026-03-10T14:50:38.538760+0000 mon.b (mon.1) 19 : audit [DBG] from='client.? 
192.168.123.103:0/2589983105' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T14:50:39.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:38 vm00 bash[20726]: audit 2026-03-10T14:50:37.899920+0000 mon.b (mon.1) 18 : audit [INF] from='client.? 192.168.123.103:0/3622869395' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d5d7abd1-1279-4f32-bce7-89f79446b2d1"}]: dispatch 2026-03-10T14:50:39.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:38 vm00 bash[20726]: audit 2026-03-10T14:50:37.899920+0000 mon.b (mon.1) 18 : audit [INF] from='client.? 192.168.123.103:0/3622869395' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d5d7abd1-1279-4f32-bce7-89f79446b2d1"}]: dispatch 2026-03-10T14:50:39.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:38 vm00 bash[20726]: audit 2026-03-10T14:50:37.901667+0000 mon.a (mon.0) 550 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d5d7abd1-1279-4f32-bce7-89f79446b2d1"}]: dispatch 2026-03-10T14:50:39.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:38 vm00 bash[20726]: audit 2026-03-10T14:50:37.901667+0000 mon.a (mon.0) 550 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d5d7abd1-1279-4f32-bce7-89f79446b2d1"}]: dispatch 2026-03-10T14:50:39.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:38 vm00 bash[20726]: audit 2026-03-10T14:50:37.904998+0000 mon.a (mon.0) 551 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d5d7abd1-1279-4f32-bce7-89f79446b2d1"}]': finished 2026-03-10T14:50:39.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:38 vm00 bash[20726]: audit 2026-03-10T14:50:37.904998+0000 mon.a (mon.0) 551 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d5d7abd1-1279-4f32-bce7-89f79446b2d1"}]': finished
2026-03-10T14:50:39.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:38 vm00 bash[20726]: cluster 2026-03-10T14:50:37.908925+0000 mon.a (mon.0) 552 : cluster [DBG] osdmap e42: 7 total, 6 up, 7 in
2026-03-10T14:50:39.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:38 vm00 bash[20726]: cluster 2026-03-10T14:50:37.908925+0000 mon.a (mon.0) 552 : cluster [DBG] osdmap e42: 7 total, 6 up, 7 in
2026-03-10T14:50:39.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:38 vm00 bash[20726]: audit 2026-03-10T14:50:37.909232+0000 mon.a (mon.0) 553 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T14:50:39.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:38 vm00 bash[20726]: audit 2026-03-10T14:50:37.909232+0000 mon.a (mon.0) 553 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T14:50:39.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:38 vm00 bash[20726]: audit 2026-03-10T14:50:38.538760+0000 mon.b (mon.1) 19 : audit [DBG] from='client.? 192.168.123.103:0/2589983105' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T14:50:39.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:38 vm00 bash[20726]: audit 2026-03-10T14:50:38.538760+0000 mon.b (mon.1) 19 : audit [DBG] from='client.? 192.168.123.103:0/2589983105' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T14:50:40.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:39 vm00 bash[28403]: cluster 2026-03-10T14:50:38.725375+0000 mgr.y (mgr.14152) 196 : cluster [DBG] pgmap v173: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-10T14:50:40.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:39 vm00 bash[28403]: cluster 2026-03-10T14:50:38.725375+0000 mgr.y (mgr.14152) 196 : cluster [DBG] pgmap v173: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-10T14:50:40.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:39 vm00 bash[20726]: cluster 2026-03-10T14:50:38.725375+0000 mgr.y (mgr.14152) 196 : cluster [DBG] pgmap v173: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-10T14:50:40.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:39 vm00 bash[20726]: cluster 2026-03-10T14:50:38.725375+0000 mgr.y (mgr.14152) 196 : cluster [DBG] pgmap v173: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-10T14:50:40.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:39 vm03 bash[23394]: cluster 2026-03-10T14:50:38.725375+0000 mgr.y (mgr.14152) 196 : cluster [DBG] pgmap v173: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-10T14:50:40.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:39 vm03 bash[23394]: cluster 2026-03-10T14:50:38.725375+0000 mgr.y (mgr.14152) 196 : cluster [DBG] pgmap v173: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-10T14:50:42.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:41 vm00 bash[28403]: cluster 2026-03-10T14:50:40.725608+0000 mgr.y (mgr.14152) 197 : cluster [DBG] pgmap v174: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-10T14:50:42.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:41 vm00 bash[28403]: cluster 2026-03-10T14:50:40.725608+0000 mgr.y (mgr.14152) 197 : cluster [DBG] pgmap v174: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-10T14:50:42.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:41 vm00 bash[20726]: cluster 2026-03-10T14:50:40.725608+0000 mgr.y (mgr.14152) 197 : cluster [DBG] pgmap v174: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-10T14:50:42.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:41 vm00 bash[20726]: cluster 2026-03-10T14:50:40.725608+0000 mgr.y (mgr.14152) 197 : cluster [DBG] pgmap v174: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-10T14:50:42.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:41 vm03 bash[23394]: cluster 2026-03-10T14:50:40.725608+0000 mgr.y (mgr.14152) 197 : cluster [DBG] pgmap v174: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-10T14:50:42.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:41 vm03 bash[23394]: cluster 2026-03-10T14:50:40.725608+0000 mgr.y (mgr.14152) 197 : cluster [DBG] pgmap v174: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-10T14:50:44.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:43 vm00 bash[28403]: cluster 2026-03-10T14:50:42.725841+0000 mgr.y (mgr.14152) 198 : cluster [DBG] pgmap v175: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-10T14:50:44.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:43 vm00 bash[28403]: cluster 2026-03-10T14:50:42.725841+0000 mgr.y (mgr.14152) 198 : cluster [DBG] pgmap v175: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-10T14:50:44.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:43 vm00 bash[20726]: cluster 2026-03-10T14:50:42.725841+0000 mgr.y (mgr.14152) 198 : cluster [DBG] pgmap v175: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-10T14:50:44.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:43 vm00 bash[20726]: cluster 2026-03-10T14:50:42.725841+0000 mgr.y (mgr.14152) 198 : cluster [DBG] pgmap v175: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-10T14:50:44.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:43 vm03 bash[23394]: cluster 2026-03-10T14:50:42.725841+0000 mgr.y (mgr.14152) 198 : cluster [DBG] pgmap v175: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-10T14:50:44.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:43 vm03 bash[23394]: cluster 2026-03-10T14:50:42.725841+0000 mgr.y (mgr.14152) 198 : cluster [DBG] pgmap v175: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-10T14:50:46.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:45 vm00 bash[28403]: cluster 2026-03-10T14:50:44.726086+0000 mgr.y (mgr.14152) 199 : cluster [DBG] pgmap v176: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-10T14:50:46.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:45 vm00 bash[28403]: cluster 2026-03-10T14:50:44.726086+0000 mgr.y (mgr.14152) 199 : cluster [DBG] pgmap v176: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-10T14:50:46.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:45 vm00 bash[20726]: cluster 2026-03-10T14:50:44.726086+0000 mgr.y (mgr.14152) 199 : cluster [DBG] pgmap v176: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-10T14:50:46.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:45 vm00 bash[20726]: cluster 2026-03-10T14:50:44.726086+0000 mgr.y (mgr.14152) 199 : cluster [DBG] pgmap v176: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-10T14:50:46.343 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:45 vm03 bash[23394]: cluster 2026-03-10T14:50:44.726086+0000 mgr.y (mgr.14152) 199 : cluster [DBG] pgmap v176: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-10T14:50:46.343 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:45 vm03 bash[23394]: cluster 2026-03-10T14:50:44.726086+0000 mgr.y (mgr.14152) 199 : cluster [DBG] pgmap v176: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-10T14:50:47.059 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:46 vm03 bash[23394]: audit 2026-03-10T14:50:46.767705+0000 mon.a (mon.0) 554 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch
2026-03-10T14:50:47.059 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:46 vm03 bash[23394]: audit 2026-03-10T14:50:46.767705+0000 mon.a (mon.0) 554 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch
2026-03-10T14:50:47.059 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:46 vm03 bash[23394]: audit 2026-03-10T14:50:46.768264+0000 mon.a (mon.0) 555 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:50:47.059 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:46 vm03 bash[23394]: audit 2026-03-10T14:50:46.768264+0000 mon.a (mon.0) 555 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:50:47.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:46 vm00 bash[28403]: audit 2026-03-10T14:50:46.767705+0000 mon.a (mon.0) 554 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch
2026-03-10T14:50:47.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:46 vm00 bash[28403]: audit 2026-03-10T14:50:46.767705+0000 mon.a (mon.0) 554 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch
2026-03-10T14:50:47.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:46 vm00 bash[28403]: audit 2026-03-10T14:50:46.768264+0000 mon.a (mon.0) 555 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:50:47.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:46 vm00 bash[28403]: audit 2026-03-10T14:50:46.768264+0000 mon.a (mon.0) 555 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:50:47.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:46 vm00 bash[20726]: audit 2026-03-10T14:50:46.767705+0000 mon.a (mon.0) 554 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch
2026-03-10T14:50:47.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:46 vm00 bash[20726]: audit 2026-03-10T14:50:46.767705+0000 mon.a (mon.0) 554 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch
2026-03-10T14:50:47.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:46 vm00 bash[20726]: audit 2026-03-10T14:50:46.768264+0000 mon.a (mon.0) 555 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:50:47.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:46 vm00 bash[20726]: audit 2026-03-10T14:50:46.768264+0000 mon.a (mon.0) 555 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:50:47.895 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:50:47 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:50:47.895 INFO:journalctl@ceph.osd.4.vm03.stdout:Mar 10 14:50:47 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:50:47.895 INFO:journalctl@ceph.osd.5.vm03.stdout:Mar 10 14:50:47 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:50:47.896 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:47 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:50:48.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:47 vm00 bash[28403]: cluster 2026-03-10T14:50:46.726328+0000 mgr.y (mgr.14152) 200 : cluster [DBG] pgmap v177: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-10T14:50:48.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:47 vm00 bash[28403]: cluster 2026-03-10T14:50:46.726328+0000 mgr.y (mgr.14152) 200 : cluster [DBG] pgmap v177: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-10T14:50:48.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:47 vm00 bash[28403]: cephadm 2026-03-10T14:50:46.768739+0000 mgr.y (mgr.14152) 201 : cephadm [INF] Deploying daemon osd.6 on vm03
2026-03-10T14:50:48.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:47 vm00 bash[28403]: cephadm 2026-03-10T14:50:46.768739+0000 mgr.y (mgr.14152) 201 : cephadm [INF] Deploying daemon osd.6 on vm03
2026-03-10T14:50:48.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:47 vm00 bash[20726]: cluster 2026-03-10T14:50:46.726328+0000 mgr.y (mgr.14152) 200 : cluster [DBG] pgmap v177: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-10T14:50:48.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:47 vm00 bash[20726]: cluster 2026-03-10T14:50:46.726328+0000 mgr.y (mgr.14152) 200 : cluster [DBG] pgmap v177: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-10T14:50:48.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:47 vm00 bash[20726]: cephadm 2026-03-10T14:50:46.768739+0000 mgr.y (mgr.14152) 201 : cephadm [INF] Deploying daemon osd.6 on vm03
2026-03-10T14:50:48.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:47 vm00 bash[20726]: cephadm 2026-03-10T14:50:46.768739+0000 mgr.y (mgr.14152) 201 : cephadm [INF] Deploying daemon osd.6 on vm03
2026-03-10T14:50:48.248 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:50:48 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:50:48.249 INFO:journalctl@ceph.osd.4.vm03.stdout:Mar 10 14:50:48 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:50:48.253 INFO:journalctl@ceph.osd.5.vm03.stdout:Mar 10 14:50:48 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:50:48.253 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:47 vm03 bash[23394]: cluster 2026-03-10T14:50:46.726328+0000 mgr.y (mgr.14152) 200 : cluster [DBG] pgmap v177: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-10T14:50:48.253 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:47 vm03 bash[23394]: cluster 2026-03-10T14:50:46.726328+0000 mgr.y (mgr.14152) 200 : cluster [DBG] pgmap v177: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-10T14:50:48.253 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:47 vm03 bash[23394]: cephadm 2026-03-10T14:50:46.768739+0000 mgr.y (mgr.14152) 201 : cephadm [INF] Deploying daemon osd.6 on vm03
2026-03-10T14:50:48.253 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:47 vm03 bash[23394]: cephadm 2026-03-10T14:50:46.768739+0000 mgr.y (mgr.14152) 201 : cephadm [INF] Deploying daemon osd.6 on vm03
2026-03-10T14:50:48.253 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:48 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:50:49.350 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:48 vm03 bash[23394]: audit 2026-03-10T14:50:48.170304+0000 mon.a (mon.0) 556 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:50:49.350 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:48 vm03 bash[23394]: audit 2026-03-10T14:50:48.170304+0000 mon.a (mon.0) 556 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:50:49.350 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:48 vm03 bash[23394]: audit 2026-03-10T14:50:48.176776+0000 mon.a (mon.0) 557 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:50:49.350 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:48 vm03 bash[23394]: audit 2026-03-10T14:50:48.176776+0000 mon.a (mon.0) 557 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:50:49.350 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:48 vm03 bash[23394]: audit 2026-03-10T14:50:48.183642+0000 mon.a (mon.0) 558 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:50:49.350 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:48 vm03 bash[23394]: audit 2026-03-10T14:50:48.183642+0000 mon.a (mon.0) 558 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:50:49.466 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:48 vm00 bash[28403]: audit 2026-03-10T14:50:48.170304+0000 mon.a (mon.0) 556 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:50:49.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:48 vm00 bash[28403]: audit 2026-03-10T14:50:48.170304+0000 mon.a (mon.0) 556 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:50:49.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:48 vm00 bash[28403]: audit 2026-03-10T14:50:48.176776+0000 mon.a (mon.0) 557 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:50:49.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:48 vm00 bash[28403]: audit 2026-03-10T14:50:48.176776+0000 mon.a (mon.0) 557 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:50:49.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:48 vm00 bash[28403]: audit 2026-03-10T14:50:48.183642+0000 mon.a (mon.0) 558 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:50:49.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:48 vm00 bash[28403]: audit 2026-03-10T14:50:48.183642+0000 mon.a (mon.0) 558 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:50:49.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:48 vm00 bash[20726]: audit 2026-03-10T14:50:48.170304+0000 mon.a (mon.0) 556 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:50:49.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:48 vm00 bash[20726]: audit 2026-03-10T14:50:48.170304+0000 mon.a (mon.0) 556 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:50:49.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:48 vm00 bash[20726]: audit 2026-03-10T14:50:48.176776+0000 mon.a (mon.0) 557 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:50:49.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:48 vm00 bash[20726]: audit 2026-03-10T14:50:48.176776+0000 mon.a (mon.0) 557 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:50:49.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:48 vm00 bash[20726]: audit 2026-03-10T14:50:48.183642+0000 mon.a (mon.0) 558 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:50:49.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:48 vm00 bash[20726]: audit 2026-03-10T14:50:48.183642+0000 mon.a (mon.0) 558 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:50:50.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:50 vm00 bash[28403]: cluster 2026-03-10T14:50:48.726608+0000 mgr.y (mgr.14152) 202 : cluster [DBG] pgmap v178: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-10T14:50:50.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:50 vm00 bash[28403]: cluster 2026-03-10T14:50:48.726608+0000 mgr.y (mgr.14152) 202 : cluster [DBG] pgmap v178: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-10T14:50:50.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:50 vm00 bash[20726]: cluster 2026-03-10T14:50:48.726608+0000 mgr.y (mgr.14152) 202 : cluster [DBG] pgmap v178: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-10T14:50:50.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:50 vm00 bash[20726]: cluster 2026-03-10T14:50:48.726608+0000 mgr.y (mgr.14152) 202 : cluster [DBG] pgmap v178: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-10T14:50:50.605 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:50 vm03 bash[23394]: cluster 2026-03-10T14:50:48.726608+0000 mgr.y (mgr.14152) 202 : cluster [DBG] pgmap v178: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-10T14:50:50.605 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:50 vm03 bash[23394]: cluster 2026-03-10T14:50:48.726608+0000 mgr.y (mgr.14152) 202 : cluster [DBG] pgmap v178: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-10T14:50:52.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:52 vm00 bash[28403]: cluster 2026-03-10T14:50:50.726949+0000 mgr.y (mgr.14152) 203 : cluster [DBG] pgmap v179: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-10T14:50:52.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:52 vm00 bash[28403]: cluster 2026-03-10T14:50:50.726949+0000 mgr.y (mgr.14152) 203 : cluster [DBG] pgmap v179: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-10T14:50:52.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:52 vm00 bash[28403]: audit 2026-03-10T14:50:51.567487+0000 mon.b (mon.1) 20 : audit [INF] from='osd.6 v2:192.168.123.103:6808/2099210513' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch
2026-03-10T14:50:52.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:52 vm00 bash[28403]: audit 2026-03-10T14:50:51.567487+0000 mon.b (mon.1) 20 : audit [INF] from='osd.6 v2:192.168.123.103:6808/2099210513' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch
2026-03-10T14:50:52.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:52 vm00 bash[28403]: audit 2026-03-10T14:50:51.569292+0000 mon.a (mon.0) 559 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch
2026-03-10T14:50:52.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:52 vm00 bash[28403]: audit 2026-03-10T14:50:51.569292+0000 mon.a (mon.0) 559 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch
2026-03-10T14:50:52.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:52 vm00 bash[20726]: cluster 2026-03-10T14:50:50.726949+0000 mgr.y (mgr.14152) 203 : cluster [DBG] pgmap v179: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-10T14:50:52.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:52 vm00 bash[20726]: cluster 2026-03-10T14:50:50.726949+0000 mgr.y (mgr.14152) 203 : cluster [DBG] pgmap v179: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-10T14:50:52.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:52 vm00 bash[20726]: audit 2026-03-10T14:50:51.567487+0000 mon.b (mon.1) 20 : audit [INF] from='osd.6 v2:192.168.123.103:6808/2099210513' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch
2026-03-10T14:50:52.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:52 vm00 bash[20726]: audit 2026-03-10T14:50:51.567487+0000 mon.b (mon.1) 20 : audit [INF] from='osd.6 v2:192.168.123.103:6808/2099210513' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch
2026-03-10T14:50:52.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:52 vm00 bash[20726]: audit 2026-03-10T14:50:51.569292+0000 mon.a (mon.0) 559 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch
2026-03-10T14:50:52.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:52 vm00 bash[20726]: audit 2026-03-10T14:50:51.569292+0000 mon.a (mon.0) 559 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch
2026-03-10T14:50:52.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:52 vm03 bash[23394]: cluster 2026-03-10T14:50:50.726949+0000 mgr.y (mgr.14152) 203 : cluster [DBG] pgmap v179: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-10T14:50:52.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:52 vm03 bash[23394]: cluster 2026-03-10T14:50:50.726949+0000 mgr.y (mgr.14152) 203 : cluster [DBG] pgmap v179: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-10T14:50:52.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:52 vm03 bash[23394]: audit 2026-03-10T14:50:51.567487+0000 mon.b (mon.1) 20 : audit [INF] from='osd.6 v2:192.168.123.103:6808/2099210513' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch
2026-03-10T14:50:52.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:52 vm03 bash[23394]: audit 2026-03-10T14:50:51.567487+0000 mon.b (mon.1) 20 : audit [INF] from='osd.6 v2:192.168.123.103:6808/2099210513' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch
2026-03-10T14:50:52.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:52 vm03 bash[23394]: audit 2026-03-10T14:50:51.569292+0000 mon.a (mon.0) 559 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch
2026-03-10T14:50:52.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:52 vm03 bash[23394]: audit 2026-03-10T14:50:51.569292+0000 mon.a (mon.0) 559 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch
2026-03-10T14:50:53.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:53 vm03 bash[23394]: audit 2026-03-10T14:50:52.204479+0000 mon.a (mon.0) 560 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished
2026-03-10T14:50:53.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:53 vm03 bash[23394]: audit 2026-03-10T14:50:52.204479+0000 mon.a (mon.0) 560 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished
2026-03-10T14:50:53.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:53 vm03 bash[23394]: cluster 2026-03-10T14:50:52.207093+0000 mon.a (mon.0) 561 : cluster [DBG] osdmap e43: 7 total, 6 up, 7 in
2026-03-10T14:50:53.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:53 vm03 bash[23394]: cluster 2026-03-10T14:50:52.207093+0000 mon.a (mon.0) 561 : cluster [DBG] osdmap e43: 7 total, 6 up, 7 in
2026-03-10T14:50:53.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:53 vm03 bash[23394]: audit 2026-03-10T14:50:52.208739+0000 mon.a (mon.0) 562 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T14:50:53.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:53 vm03 bash[23394]: audit 2026-03-10T14:50:52.208739+0000 mon.a (mon.0) 562 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T14:50:53.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:53 vm03 bash[23394]: audit 2026-03-10T14:50:52.215586+0000 mon.b (mon.1) 21 : audit [INF] from='osd.6 v2:192.168.123.103:6808/2099210513' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-10T14:50:53.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:53 vm03 bash[23394]: audit 2026-03-10T14:50:52.215586+0000 mon.b (mon.1) 21 : audit [INF] from='osd.6 v2:192.168.123.103:6808/2099210513' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-10T14:50:53.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:53 vm03 bash[23394]: audit 2026-03-10T14:50:52.217076+0000 mon.a (mon.0) 563 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-10T14:50:53.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:53 vm03 bash[23394]: audit 2026-03-10T14:50:52.217076+0000 mon.a (mon.0) 563 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-10T14:50:53.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:53 vm00 bash[28403]: audit 2026-03-10T14:50:52.204479+0000 mon.a (mon.0) 560 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished
2026-03-10T14:50:53.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:53 vm00 bash[28403]: audit 2026-03-10T14:50:52.204479+0000 mon.a (mon.0) 560 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished
2026-03-10T14:50:53.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:53 vm00 bash[28403]: cluster 2026-03-10T14:50:52.207093+0000 mon.a (mon.0) 561 : cluster [DBG] osdmap e43: 7 total, 6 up, 7 in
2026-03-10T14:50:53.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:53 vm00 bash[28403]: cluster 2026-03-10T14:50:52.207093+0000 mon.a (mon.0) 561 : cluster [DBG] osdmap e43: 7 total, 6 up, 7 in
2026-03-10T14:50:53.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:53 vm00 bash[28403]: audit 2026-03-10T14:50:52.208739+0000 mon.a (mon.0) 562 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T14:50:53.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:53 vm00 bash[28403]: audit 2026-03-10T14:50:52.208739+0000 mon.a (mon.0) 562 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T14:50:53.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:53 vm00 bash[28403]: audit 2026-03-10T14:50:52.215586+0000 mon.b (mon.1) 21 : audit [INF] from='osd.6 v2:192.168.123.103:6808/2099210513' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-10T14:50:53.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:53 vm00 bash[28403]: audit 2026-03-10T14:50:52.215586+0000 mon.b (mon.1) 21 : audit [INF] from='osd.6 v2:192.168.123.103:6808/2099210513' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-10T14:50:53.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:53 vm00 bash[28403]: audit 2026-03-10T14:50:52.217076+0000 mon.a (mon.0) 563 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-10T14:50:53.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:53 vm00 bash[28403]: audit 2026-03-10T14:50:52.217076+0000 mon.a (mon.0) 563 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-10T14:50:53.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:53 vm00 bash[20726]: audit 2026-03-10T14:50:52.204479+0000 mon.a (mon.0) 560 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished
2026-03-10T14:50:53.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:53 vm00 bash[20726]: audit 2026-03-10T14:50:52.204479+0000 mon.a (mon.0) 560 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished
2026-03-10T14:50:53.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:53 vm00 bash[20726]: cluster 2026-03-10T14:50:52.207093+0000 mon.a (mon.0) 561 : cluster [DBG] osdmap e43: 7 total, 6 up, 7 in
2026-03-10T14:50:53.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:53 vm00 bash[20726]: cluster 2026-03-10T14:50:52.207093+0000 mon.a (mon.0) 561 : cluster [DBG] osdmap e43: 7 total, 6 up, 7 in
2026-03-10T14:50:53.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:53 vm00 bash[20726]: audit 2026-03-10T14:50:52.208739+0000 mon.a (mon.0) 562 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T14:50:53.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:53 vm00 bash[20726]: audit 2026-03-10T14:50:52.208739+0000 mon.a (mon.0) 562 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T14:50:53.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:53 vm00 bash[20726]: audit 2026-03-10T14:50:52.215586+0000 mon.b (mon.1) 21 : audit [INF] from='osd.6 v2:192.168.123.103:6808/2099210513' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-10T14:50:53.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:53 vm00 bash[20726]: audit 2026-03-10T14:50:52.215586+0000 mon.b (mon.1) 21 : audit [INF] from='osd.6 v2:192.168.123.103:6808/2099210513' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-10T14:50:53.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:53 vm00 bash[20726]: audit 2026-03-10T14:50:52.217076+0000 mon.a (mon.0) 563 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-10T14:50:53.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:53 vm00 bash[20726]: audit 2026-03-10T14:50:52.217076+0000 mon.a (mon.0) 563 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-10T14:50:54.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:54 vm03 bash[23394]: cluster 2026-03-10T14:50:52.727217+0000 mgr.y (mgr.14152) 204 : cluster [DBG] pgmap v181: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-10T14:50:54.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:54 vm03 bash[23394]: cluster 2026-03-10T14:50:52.727217+0000 mgr.y (mgr.14152) 204 : cluster [DBG] pgmap v181: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-10T14:50:54.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:54 vm03 bash[23394]: audit 2026-03-10T14:50:53.208480+0000 mon.a (mon.0) 564 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished
2026-03-10T14:50:54.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:54 vm03 bash[23394]: audit 2026-03-10T14:50:53.208480+0000 mon.a (mon.0) 564 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished
2026-03-10T14:50:54.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:54 vm03 bash[23394]: cluster 2026-03-10T14:50:53.214336+0000 mon.a (mon.0) 565 : cluster [DBG] osdmap e44: 7 total, 6 up, 7 in
2026-03-10T14:50:54.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:54 vm03 bash[23394]: cluster 2026-03-10T14:50:53.214336+0000 mon.a (mon.0) 565 : cluster [DBG] osdmap e44: 7 total, 6 up, 7 in
2026-03-10T14:50:54.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:54 vm03 bash[23394]: audit 2026-03-10T14:50:53.216030+0000 mon.a (mon.0) 566 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T14:50:54.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:54 vm03 bash[23394]: audit 2026-03-10T14:50:53.216030+0000 mon.a (mon.0) 566 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd 
metadata", "id": 6}]: dispatch 2026-03-10T14:50:54.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:54 vm03 bash[23394]: audit 2026-03-10T14:50:53.229568+0000 mon.a (mon.0) 567 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T14:50:54.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:54 vm03 bash[23394]: audit 2026-03-10T14:50:53.229568+0000 mon.a (mon.0) 567 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T14:50:54.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:54 vm03 bash[23394]: cluster 2026-03-10T14:50:53.401865+0000 mon.a (mon.0) 568 : cluster [DBG] osdmap e45: 7 total, 6 up, 7 in 2026-03-10T14:50:54.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:54 vm03 bash[23394]: cluster 2026-03-10T14:50:53.401865+0000 mon.a (mon.0) 568 : cluster [DBG] osdmap e45: 7 total, 6 up, 7 in 2026-03-10T14:50:54.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:54 vm03 bash[23394]: audit 2026-03-10T14:50:53.403412+0000 mon.a (mon.0) 569 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T14:50:54.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:54 vm03 bash[23394]: audit 2026-03-10T14:50:53.403412+0000 mon.a (mon.0) 569 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T14:50:54.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:54 vm00 bash[28403]: cluster 2026-03-10T14:50:52.727217+0000 mgr.y (mgr.14152) 204 : cluster [DBG] pgmap v181: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T14:50:54.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:54 vm00 bash[28403]: cluster 2026-03-10T14:50:52.727217+0000 mgr.y (mgr.14152) 204 : cluster [DBG] pgmap 
v181: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T14:50:54.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:54 vm00 bash[28403]: audit 2026-03-10T14:50:53.208480+0000 mon.a (mon.0) 564 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished 2026-03-10T14:50:54.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:54 vm00 bash[28403]: audit 2026-03-10T14:50:53.208480+0000 mon.a (mon.0) 564 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished 2026-03-10T14:50:54.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:54 vm00 bash[28403]: cluster 2026-03-10T14:50:53.214336+0000 mon.a (mon.0) 565 : cluster [DBG] osdmap e44: 7 total, 6 up, 7 in 2026-03-10T14:50:54.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:54 vm00 bash[28403]: cluster 2026-03-10T14:50:53.214336+0000 mon.a (mon.0) 565 : cluster [DBG] osdmap e44: 7 total, 6 up, 7 in 2026-03-10T14:50:54.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:54 vm00 bash[28403]: audit 2026-03-10T14:50:53.216030+0000 mon.a (mon.0) 566 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T14:50:54.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:54 vm00 bash[28403]: audit 2026-03-10T14:50:53.216030+0000 mon.a (mon.0) 566 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T14:50:54.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:54 vm00 bash[28403]: audit 2026-03-10T14:50:53.229568+0000 mon.a (mon.0) 567 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T14:50:54.717 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:54 vm00 bash[28403]: audit 2026-03-10T14:50:53.229568+0000 mon.a (mon.0) 567 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T14:50:54.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:54 vm00 bash[28403]: cluster 2026-03-10T14:50:53.401865+0000 mon.a (mon.0) 568 : cluster [DBG] osdmap e45: 7 total, 6 up, 7 in 2026-03-10T14:50:54.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:54 vm00 bash[28403]: cluster 2026-03-10T14:50:53.401865+0000 mon.a (mon.0) 568 : cluster [DBG] osdmap e45: 7 total, 6 up, 7 in 2026-03-10T14:50:54.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:54 vm00 bash[28403]: audit 2026-03-10T14:50:53.403412+0000 mon.a (mon.0) 569 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T14:50:54.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:54 vm00 bash[28403]: audit 2026-03-10T14:50:53.403412+0000 mon.a (mon.0) 569 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T14:50:54.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:54 vm00 bash[20726]: cluster 2026-03-10T14:50:52.727217+0000 mgr.y (mgr.14152) 204 : cluster [DBG] pgmap v181: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T14:50:54.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:54 vm00 bash[20726]: cluster 2026-03-10T14:50:52.727217+0000 mgr.y (mgr.14152) 204 : cluster [DBG] pgmap v181: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-10T14:50:54.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:54 vm00 bash[20726]: audit 2026-03-10T14:50:53.208480+0000 mon.a (mon.0) 564 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, 
"weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished 2026-03-10T14:50:54.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:54 vm00 bash[20726]: audit 2026-03-10T14:50:53.208480+0000 mon.a (mon.0) 564 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished 2026-03-10T14:50:54.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:54 vm00 bash[20726]: cluster 2026-03-10T14:50:53.214336+0000 mon.a (mon.0) 565 : cluster [DBG] osdmap e44: 7 total, 6 up, 7 in 2026-03-10T14:50:54.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:54 vm00 bash[20726]: cluster 2026-03-10T14:50:53.214336+0000 mon.a (mon.0) 565 : cluster [DBG] osdmap e44: 7 total, 6 up, 7 in 2026-03-10T14:50:54.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:54 vm00 bash[20726]: audit 2026-03-10T14:50:53.216030+0000 mon.a (mon.0) 566 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T14:50:54.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:54 vm00 bash[20726]: audit 2026-03-10T14:50:53.216030+0000 mon.a (mon.0) 566 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T14:50:54.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:54 vm00 bash[20726]: audit 2026-03-10T14:50:53.229568+0000 mon.a (mon.0) 567 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T14:50:54.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:54 vm00 bash[20726]: audit 2026-03-10T14:50:53.229568+0000 mon.a (mon.0) 567 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T14:50:54.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:54 vm00 
bash[20726]: cluster 2026-03-10T14:50:53.401865+0000 mon.a (mon.0) 568 : cluster [DBG] osdmap e45: 7 total, 6 up, 7 in 2026-03-10T14:50:54.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:54 vm00 bash[20726]: cluster 2026-03-10T14:50:53.401865+0000 mon.a (mon.0) 568 : cluster [DBG] osdmap e45: 7 total, 6 up, 7 in 2026-03-10T14:50:54.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:54 vm00 bash[20726]: audit 2026-03-10T14:50:53.403412+0000 mon.a (mon.0) 569 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T14:50:54.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:54 vm00 bash[20726]: audit 2026-03-10T14:50:53.403412+0000 mon.a (mon.0) 569 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T14:50:55.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:55 vm03 bash[23394]: cluster 2026-03-10T14:50:52.571451+0000 osd.6 (osd.6) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T14:50:55.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:55 vm03 bash[23394]: cluster 2026-03-10T14:50:52.571451+0000 osd.6 (osd.6) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T14:50:55.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:55 vm03 bash[23394]: cluster 2026-03-10T14:50:52.571515+0000 osd.6 (osd.6) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T14:50:55.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:55 vm03 bash[23394]: cluster 2026-03-10T14:50:52.571515+0000 osd.6 (osd.6) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T14:50:55.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:55 vm03 bash[23394]: audit 2026-03-10T14:50:54.226195+0000 mon.a (mon.0) 570 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T14:50:55.625 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:55 vm03 bash[23394]: audit 2026-03-10T14:50:54.226195+0000 mon.a (mon.0) 570 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T14:50:55.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:55 vm03 bash[23394]: cluster 2026-03-10T14:50:54.408248+0000 mon.a (mon.0) 571 : cluster [DBG] osdmap e46: 7 total, 6 up, 7 in 2026-03-10T14:50:55.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:55 vm03 bash[23394]: cluster 2026-03-10T14:50:54.408248+0000 mon.a (mon.0) 571 : cluster [DBG] osdmap e46: 7 total, 6 up, 7 in 2026-03-10T14:50:55.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:55 vm03 bash[23394]: audit 2026-03-10T14:50:54.408457+0000 mon.a (mon.0) 572 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T14:50:55.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:55 vm03 bash[23394]: audit 2026-03-10T14:50:54.408457+0000 mon.a (mon.0) 572 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T14:50:55.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:55 vm03 bash[23394]: audit 2026-03-10T14:50:54.647088+0000 mon.a (mon.0) 573 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:50:55.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:55 vm03 bash[23394]: audit 2026-03-10T14:50:54.647088+0000 mon.a (mon.0) 573 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:50:55.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:55 vm03 bash[23394]: audit 2026-03-10T14:50:54.690924+0000 mon.a (mon.0) 574 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:50:55.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:55 vm03 
bash[23394]: audit 2026-03-10T14:50:54.690924+0000 mon.a (mon.0) 574 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:50:55.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:55 vm03 bash[23394]: audit 2026-03-10T14:50:54.774453+0000 mon.a (mon.0) 575 : audit [INF] from='osd.6 ' entity='osd.6' 2026-03-10T14:50:55.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:55 vm03 bash[23394]: audit 2026-03-10T14:50:54.774453+0000 mon.a (mon.0) 575 : audit [INF] from='osd.6 ' entity='osd.6' 2026-03-10T14:50:55.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:55 vm03 bash[23394]: audit 2026-03-10T14:50:55.091881+0000 mon.a (mon.0) 576 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:50:55.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:55 vm03 bash[23394]: audit 2026-03-10T14:50:55.091881+0000 mon.a (mon.0) 576 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:50:55.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:55 vm03 bash[23394]: audit 2026-03-10T14:50:55.092641+0000 mon.a (mon.0) 577 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:50:55.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:55 vm03 bash[23394]: audit 2026-03-10T14:50:55.092641+0000 mon.a (mon.0) 577 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:50:55.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:55 vm03 bash[23394]: audit 2026-03-10T14:50:55.099127+0000 mon.a (mon.0) 578 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:50:55.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 
14:50:55 vm03 bash[23394]: audit 2026-03-10T14:50:55.099127+0000 mon.a (mon.0) 578 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:50:55.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:55 vm03 bash[23394]: audit 2026-03-10T14:50:55.226427+0000 mon.a (mon.0) 579 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T14:50:55.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:55 vm03 bash[23394]: audit 2026-03-10T14:50:55.226427+0000 mon.a (mon.0) 579 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T14:50:55.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:55 vm00 bash[28403]: cluster 2026-03-10T14:50:52.571451+0000 osd.6 (osd.6) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T14:50:55.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:55 vm00 bash[28403]: cluster 2026-03-10T14:50:52.571451+0000 osd.6 (osd.6) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T14:50:55.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:55 vm00 bash[28403]: cluster 2026-03-10T14:50:52.571515+0000 osd.6 (osd.6) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T14:50:55.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:55 vm00 bash[28403]: cluster 2026-03-10T14:50:52.571515+0000 osd.6 (osd.6) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T14:50:55.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:55 vm00 bash[28403]: audit 2026-03-10T14:50:54.226195+0000 mon.a (mon.0) 570 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T14:50:55.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:55 vm00 bash[28403]: audit 2026-03-10T14:50:54.226195+0000 mon.a (mon.0) 570 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 
cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T14:50:55.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:55 vm00 bash[28403]: cluster 2026-03-10T14:50:54.408248+0000 mon.a (mon.0) 571 : cluster [DBG] osdmap e46: 7 total, 6 up, 7 in 2026-03-10T14:50:55.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:55 vm00 bash[28403]: cluster 2026-03-10T14:50:54.408248+0000 mon.a (mon.0) 571 : cluster [DBG] osdmap e46: 7 total, 6 up, 7 in 2026-03-10T14:50:55.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:55 vm00 bash[28403]: audit 2026-03-10T14:50:54.408457+0000 mon.a (mon.0) 572 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T14:50:55.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:55 vm00 bash[28403]: audit 2026-03-10T14:50:54.408457+0000 mon.a (mon.0) 572 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T14:50:55.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:55 vm00 bash[28403]: audit 2026-03-10T14:50:54.647088+0000 mon.a (mon.0) 573 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:50:55.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:55 vm00 bash[28403]: audit 2026-03-10T14:50:54.647088+0000 mon.a (mon.0) 573 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:50:55.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:55 vm00 bash[28403]: audit 2026-03-10T14:50:54.690924+0000 mon.a (mon.0) 574 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:50:55.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:55 vm00 bash[28403]: audit 2026-03-10T14:50:54.690924+0000 mon.a (mon.0) 574 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:50:55.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 
14:50:55 vm00 bash[28403]: audit 2026-03-10T14:50:54.774453+0000 mon.a (mon.0) 575 : audit [INF] from='osd.6 ' entity='osd.6' 2026-03-10T14:50:55.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:55 vm00 bash[28403]: audit 2026-03-10T14:50:54.774453+0000 mon.a (mon.0) 575 : audit [INF] from='osd.6 ' entity='osd.6' 2026-03-10T14:50:55.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:55 vm00 bash[28403]: audit 2026-03-10T14:50:55.091881+0000 mon.a (mon.0) 576 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:50:55.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:55 vm00 bash[28403]: audit 2026-03-10T14:50:55.091881+0000 mon.a (mon.0) 576 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:50:55.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:55 vm00 bash[28403]: audit 2026-03-10T14:50:55.092641+0000 mon.a (mon.0) 577 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:50:55.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:55 vm00 bash[28403]: audit 2026-03-10T14:50:55.092641+0000 mon.a (mon.0) 577 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:50:55.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:55 vm00 bash[28403]: audit 2026-03-10T14:50:55.099127+0000 mon.a (mon.0) 578 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:50:55.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:55 vm00 bash[28403]: audit 2026-03-10T14:50:55.099127+0000 mon.a (mon.0) 578 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:50:55.717 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:55 vm00 bash[28403]: audit 2026-03-10T14:50:55.226427+0000 mon.a (mon.0) 579 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T14:50:55.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:55 vm00 bash[28403]: audit 2026-03-10T14:50:55.226427+0000 mon.a (mon.0) 579 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T14:50:55.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:55 vm00 bash[20726]: cluster 2026-03-10T14:50:52.571451+0000 osd.6 (osd.6) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T14:50:55.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:55 vm00 bash[20726]: cluster 2026-03-10T14:50:52.571451+0000 osd.6 (osd.6) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T14:50:55.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:55 vm00 bash[20726]: cluster 2026-03-10T14:50:52.571515+0000 osd.6 (osd.6) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T14:50:55.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:55 vm00 bash[20726]: cluster 2026-03-10T14:50:52.571515+0000 osd.6 (osd.6) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T14:50:55.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:55 vm00 bash[20726]: audit 2026-03-10T14:50:54.226195+0000 mon.a (mon.0) 570 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T14:50:55.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:55 vm00 bash[20726]: audit 2026-03-10T14:50:54.226195+0000 mon.a (mon.0) 570 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T14:50:55.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:55 vm00 bash[20726]: cluster 
2026-03-10T14:50:54.408248+0000 mon.a (mon.0) 571 : cluster [DBG] osdmap e46: 7 total, 6 up, 7 in 2026-03-10T14:50:55.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:55 vm00 bash[20726]: cluster 2026-03-10T14:50:54.408248+0000 mon.a (mon.0) 571 : cluster [DBG] osdmap e46: 7 total, 6 up, 7 in 2026-03-10T14:50:55.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:55 vm00 bash[20726]: audit 2026-03-10T14:50:54.408457+0000 mon.a (mon.0) 572 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T14:50:55.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:55 vm00 bash[20726]: audit 2026-03-10T14:50:54.408457+0000 mon.a (mon.0) 572 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T14:50:55.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:55 vm00 bash[20726]: audit 2026-03-10T14:50:54.647088+0000 mon.a (mon.0) 573 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:50:55.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:55 vm00 bash[20726]: audit 2026-03-10T14:50:54.647088+0000 mon.a (mon.0) 573 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:50:55.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:55 vm00 bash[20726]: audit 2026-03-10T14:50:54.690924+0000 mon.a (mon.0) 574 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:50:55.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:55 vm00 bash[20726]: audit 2026-03-10T14:50:54.690924+0000 mon.a (mon.0) 574 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:50:55.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:55 vm00 bash[20726]: audit 2026-03-10T14:50:54.774453+0000 mon.a (mon.0) 575 : audit [INF] from='osd.6 ' entity='osd.6' 2026-03-10T14:50:55.717 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:55 vm00 bash[20726]: audit 2026-03-10T14:50:54.774453+0000 mon.a (mon.0) 575 : audit [INF] from='osd.6 ' entity='osd.6' 2026-03-10T14:50:55.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:55 vm00 bash[20726]: audit 2026-03-10T14:50:55.091881+0000 mon.a (mon.0) 576 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:50:55.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:55 vm00 bash[20726]: audit 2026-03-10T14:50:55.091881+0000 mon.a (mon.0) 576 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:50:55.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:55 vm00 bash[20726]: audit 2026-03-10T14:50:55.092641+0000 mon.a (mon.0) 577 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:50:55.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:55 vm00 bash[20726]: audit 2026-03-10T14:50:55.092641+0000 mon.a (mon.0) 577 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:50:55.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:55 vm00 bash[20726]: audit 2026-03-10T14:50:55.099127+0000 mon.a (mon.0) 578 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:50:55.718 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:55 vm00 bash[20726]: audit 2026-03-10T14:50:55.099127+0000 mon.a (mon.0) 578 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:50:55.718 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:55 vm00 bash[20726]: audit 2026-03-10T14:50:55.226427+0000 mon.a (mon.0) 579 : audit [DBG] from='mgr.14152 
192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T14:50:55.718 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:55 vm00 bash[20726]: audit 2026-03-10T14:50:55.226427+0000 mon.a (mon.0) 579 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T14:50:55.830 INFO:teuthology.orchestra.run.vm03.stdout:Created osd(s) 6 on host 'vm03'
2026-03-10T14:50:55.942 DEBUG:teuthology.orchestra.run.vm03:osd.6> sudo journalctl -f -n 0 -u ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@osd.6.service
2026-03-10T14:50:55.943 INFO:tasks.cephadm:Deploying osd.7 on vm03 with /dev/vdb...
2026-03-10T14:50:55.943 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 -- lvm zap /dev/vdb
2026-03-10T14:50:56.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:56 vm03 bash[23394]: cluster 2026-03-10T14:50:54.727461+0000 mgr.y (mgr.14152) 205 : cluster [DBG] pgmap v185: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-10T14:50:56.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:56 vm03 bash[23394]: cluster 2026-03-10T14:50:55.782088+0000 mon.a (mon.0) 580 : cluster [INF] osd.6 v2:192.168.123.103:6808/2099210513 boot
2026-03-10T14:50:56.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:56 vm03 bash[23394]: cluster 2026-03-10T14:50:55.782492+0000 mon.a (mon.0) 581 : cluster [DBG] osdmap e47: 7 total, 7 up, 7 in
2026-03-10T14:50:56.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:56 vm03 bash[23394]: audit 2026-03-10T14:50:55.782993+0000 mon.a (mon.0) 582 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T14:50:56.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:56 vm03 bash[23394]: audit 2026-03-10T14:50:55.814830+0000 mon.a (mon.0) 583 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:50:56.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:56 vm03 bash[23394]: audit 2026-03-10T14:50:55.821013+0000 mon.a (mon.0) 584 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:50:56.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:56 vm03 bash[23394]: audit 2026-03-10T14:50:55.829340+0000 mon.a (mon.0) 585 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:50:56.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:56 vm00 bash[20726]: cluster 2026-03-10T14:50:54.727461+0000 mgr.y (mgr.14152) 205 : cluster [DBG] pgmap v185: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-10T14:50:56.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:56 vm00 bash[20726]: cluster 2026-03-10T14:50:55.782088+0000 mon.a (mon.0) 580 : cluster [INF] osd.6 v2:192.168.123.103:6808/2099210513 boot
2026-03-10T14:50:56.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:56 vm00 bash[20726]: cluster 2026-03-10T14:50:55.782492+0000 mon.a (mon.0) 581 : cluster [DBG] osdmap e47: 7 total, 7 up, 7 in
2026-03-10T14:50:56.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:56 vm00 bash[20726]: audit 2026-03-10T14:50:55.782993+0000 mon.a (mon.0) 582 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T14:50:56.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:56 vm00 bash[20726]: audit 2026-03-10T14:50:55.814830+0000 mon.a (mon.0) 583 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:50:56.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:56 vm00 bash[20726]: audit 2026-03-10T14:50:55.821013+0000 mon.a (mon.0) 584 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:50:56.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:56 vm00 bash[20726]: audit 2026-03-10T14:50:55.829340+0000 mon.a (mon.0) 585 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:50:56.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:56 vm00 bash[28403]: cluster 2026-03-10T14:50:54.727461+0000 mgr.y (mgr.14152) 205 : cluster [DBG] pgmap v185: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-10T14:50:56.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:56 vm00 bash[28403]: cluster 2026-03-10T14:50:55.782088+0000 mon.a (mon.0) 580 : cluster [INF] osd.6 v2:192.168.123.103:6808/2099210513 boot
2026-03-10T14:50:56.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:56 vm00 bash[28403]: cluster 2026-03-10T14:50:55.782492+0000 mon.a (mon.0) 581 : cluster [DBG] osdmap e47: 7 total, 7 up, 7 in
2026-03-10T14:50:56.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:56 vm00 bash[28403]: audit 2026-03-10T14:50:55.782993+0000 mon.a (mon.0) 582 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T14:50:56.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:56 vm00 bash[28403]: audit 2026-03-10T14:50:55.814830+0000 mon.a (mon.0) 583 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:50:56.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:56 vm00 bash[28403]: audit 2026-03-10T14:50:55.821013+0000 mon.a (mon.0) 584 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:50:56.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:56 vm00 bash[28403]: audit 2026-03-10T14:50:55.829340+0000 mon.a (mon.0) 585 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:50:58.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:57 vm03 bash[23394]: cluster 2026-03-10T14:50:56.727707+0000 mgr.y (mgr.14152) 206 : cluster [DBG] pgmap v187: 1 pgs: 1 remapped+peering; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-10T14:50:58.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:57 vm03 bash[23394]: cluster 2026-03-10T14:50:56.784897+0000 mon.a (mon.0) 586 : cluster [DBG] osdmap e48: 7 total, 7 up, 7 in
2026-03-10T14:50:58.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:57 vm00 bash[20726]: cluster 2026-03-10T14:50:56.727707+0000 mgr.y (mgr.14152) 206 : cluster [DBG] pgmap v187: 1 pgs: 1 remapped+peering; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-10T14:50:58.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:57 vm00 bash[20726]: cluster 2026-03-10T14:50:56.784897+0000 mon.a (mon.0) 586 : cluster [DBG] osdmap e48: 7 total, 7 up, 7 in
2026-03-10T14:50:58.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:57 vm00 bash[28403]: cluster 2026-03-10T14:50:56.727707+0000 mgr.y (mgr.14152) 206 : cluster [DBG] pgmap v187: 1 pgs: 1 remapped+peering; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-10T14:50:58.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:57 vm00 bash[28403]: cluster 2026-03-10T14:50:56.784897+0000 mon.a (mon.0) 586 : cluster [DBG] osdmap e48: 7 total, 7 up, 7 in
2026-03-10T14:50:59.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:58 vm03 bash[23394]: cluster 2026-03-10T14:50:57.789448+0000 mon.a (mon.0) 587 : cluster [DBG] osdmap e49: 7 total, 7 up, 7 in
2026-03-10T14:50:59.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:58 vm00 bash[20726]: cluster 2026-03-10T14:50:57.789448+0000 mon.a (mon.0) 587 : cluster [DBG] osdmap e49: 7 total, 7 up, 7 in
2026-03-10T14:50:59.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:58 vm00 bash[28403]: cluster 2026-03-10T14:50:57.789448+0000 mon.a (mon.0) 587 : cluster [DBG] osdmap e49: 7 total, 7 up, 7 in
2026-03-10T14:51:00.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:50:59 vm03 bash[23394]: cluster 2026-03-10T14:50:58.728202+0000 mgr.y (mgr.14152) 207 : cluster [DBG] pgmap v190: 1 pgs: 1 remapped+peering; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-10T14:51:00.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:50:59 vm00 bash[20726]: cluster 2026-03-10T14:50:58.728202+0000 mgr.y (mgr.14152) 207 : cluster [DBG] pgmap v190: 1 pgs: 1 remapped+peering; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-10T14:51:00.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:50:59 vm00 bash[28403]: cluster 2026-03-10T14:50:58.728202+0000 mgr.y (mgr.14152) 207 : cluster [DBG] pgmap v190: 1 pgs: 1 remapped+peering; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-10T14:51:00.634 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/mon.b/config
2026-03-10T14:51:02.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:01 vm03 bash[23394]: cluster 2026-03-10T14:51:00.728489+0000 mgr.y (mgr.14152) 208 : cluster [DBG] pgmap v191: 1 pgs: 1 remapped+peering; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-10T14:51:02.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:01 vm03 bash[23394]: audit 2026-03-10T14:51:01.622482+0000 mon.a (mon.0) 588 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:51:02.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:01 vm03 bash[23394]: audit 2026-03-10T14:51:01.626934+0000 mon.a (mon.0) 589 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:51:02.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:01 vm03 bash[23394]: audit 2026-03-10T14:51:01.627857+0000 mon.a (mon.0) 590 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch
2026-03-10T14:51:02.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:01 vm03 bash[23394]: audit 2026-03-10T14:51:01.628374+0000 mon.a (mon.0) 591 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch
2026-03-10T14:51:02.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:01 vm03 bash[23394]: audit 2026-03-10T14:51:01.628785+0000 mon.a (mon.0) 592 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch
2026-03-10T14:51:02.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:01 vm03 bash[23394]: audit 2026-03-10T14:51:01.629741+0000 mon.a (mon.0) 593 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:51:02.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:01 vm03 bash[23394]: audit 2026-03-10T14:51:01.630200+0000 mon.a (mon.0) 594 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T14:51:02.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:01 vm03 bash[23394]: audit 2026-03-10T14:51:01.633657+0000 mon.a (mon.0) 595 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:51:02.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:01 vm00 bash[20726]: cluster 2026-03-10T14:51:00.728489+0000 mgr.y (mgr.14152) 208 : cluster [DBG] pgmap v191: 1 pgs: 1 remapped+peering; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-10T14:51:02.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:01 vm00 bash[20726]: audit 2026-03-10T14:51:01.622482+0000 mon.a (mon.0) 588 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:51:02.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:01 vm00 bash[20726]: audit 2026-03-10T14:51:01.626934+0000 mon.a (mon.0) 589 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:51:02.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:01 vm00 bash[20726]: audit 2026-03-10T14:51:01.627857+0000 mon.a (mon.0) 590 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch
2026-03-10T14:51:02.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:01 vm00 bash[20726]: audit 2026-03-10T14:51:01.628374+0000 mon.a (mon.0) 591 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch
2026-03-10T14:51:02.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:01 vm00 bash[20726]: audit 2026-03-10T14:51:01.628785+0000 mon.a (mon.0) 592 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch
2026-03-10T14:51:02.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:01 vm00 bash[20726]: audit 2026-03-10T14:51:01.629741+0000 mon.a (mon.0) 593 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:51:02.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:01 vm00 bash[20726]: audit 2026-03-10T14:51:01.630200+0000 mon.a (mon.0) 594 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T14:51:02.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:01 vm00 bash[20726]: audit 2026-03-10T14:51:01.633657+0000 mon.a (mon.0) 595 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:51:02.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:01 vm00 bash[28403]: cluster 2026-03-10T14:51:00.728489+0000 mgr.y (mgr.14152) 208 : cluster [DBG] pgmap v191: 1 pgs: 1 remapped+peering; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-10T14:51:02.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:01 vm00 bash[28403]: audit 2026-03-10T14:51:01.622482+0000 mon.a (mon.0) 588 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:51:02.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:01 vm00 bash[28403]: audit 2026-03-10T14:51:01.626934+0000 mon.a (mon.0) 589 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:51:02.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:01 vm00 bash[28403]: audit 2026-03-10T14:51:01.627857+0000 mon.a (mon.0) 590 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch
2026-03-10T14:51:02.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:01 vm00 bash[28403]: audit 2026-03-10T14:51:01.628374+0000 mon.a (mon.0) 591 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch
2026-03-10T14:51:02.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:01 vm00 bash[28403]: audit 2026-03-10T14:51:01.628785+0000 mon.a (mon.0) 592 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch
2026-03-10T14:51:02.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:01 vm00 bash[28403]: audit 2026-03-10T14:51:01.629741+0000 mon.a (mon.0) 593 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:51:02.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:01 vm00 bash[28403]: audit 2026-03-10T14:51:01.630200+0000 mon.a (mon.0) 594 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T14:51:02.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:01 vm00 bash[28403]: audit 2026-03-10T14:51:01.633657+0000 mon.a (mon.0) 595 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:51:02.390 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T14:51:02.414 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 -- ceph orch daemon add osd vm03:/dev/vdb
2026-03-10T14:51:02.825 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:02 vm03 bash[23394]: cephadm 2026-03-10T14:51:01.616124+0000 mgr.y (mgr.14152) 209 : cephadm [INF] Detected new or changed devices on vm03
2026-03-10T14:51:02.825 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:02 vm03 bash[23394]: cephadm 2026-03-10T14:51:01.629085+0000 mgr.y (mgr.14152) 210 : cephadm [INF] Adjusting osd_memory_target on vm03 to 151.9M
2026-03-10T14:51:02.825 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:02 vm03 bash[23394]: cephadm 2026-03-10T14:51:01.629454+0000 mgr.y (mgr.14152) 211 : cephadm [WRN] Unable to set osd_memory_target on vm03 to 159306274: error parsing value: Value '159306274' is below minimum 939524096
2026-03-10T14:51:03.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:02 vm00 bash[20726]: cephadm 2026-03-10T14:51:01.616124+0000 mgr.y (mgr.14152) 209 : cephadm [INF] Detected new or changed devices on vm03
2026-03-10T14:51:03.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:02 vm00 bash[20726]: cephadm 2026-03-10T14:51:01.629085+0000 mgr.y (mgr.14152) 210 : cephadm [INF] Adjusting osd_memory_target on vm03 to 151.9M
2026-03-10T14:51:03.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:02 vm00 bash[20726]: cephadm 2026-03-10T14:51:01.629454+0000 mgr.y (mgr.14152) 211 : cephadm [WRN] Unable to set osd_memory_target on vm03 to 159306274: error parsing value: Value '159306274' is below minimum 939524096
2026-03-10T14:51:03.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:02 vm00 bash[28403]: cephadm 2026-03-10T14:51:01.616124+0000 mgr.y (mgr.14152) 209 : cephadm [INF] Detected new or changed devices on vm03
2026-03-10T14:51:03.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:02 vm00 bash[28403]: cephadm 2026-03-10T14:51:01.629085+0000 mgr.y (mgr.14152) 210 : cephadm [INF] Adjusting osd_memory_target on vm03 to 151.9M
2026-03-10T14:51:03.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:02 vm00 bash[28403]: cephadm 2026-03-10T14:51:01.629454+0000 mgr.y (mgr.14152) 211 : cephadm [WRN] Unable to set osd_memory_target on vm03 to 159306274: error parsing value: Value '159306274' is below minimum 939524096
2026-03-10T14:51:04.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:03 vm03 bash[23394]: cluster 2026-03-10T14:51:02.728809+0000 mgr.y (mgr.14152) 212 : cluster [DBG] pgmap v192: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-10T14:51:04.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:03 vm00 bash[20726]: cluster 2026-03-10T14:51:02.728809+0000 mgr.y (mgr.14152) 212 : cluster [DBG] pgmap v192: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-10T14:51:04.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:03 vm00 bash[28403]: cluster 2026-03-10T14:51:02.728809+0000 mgr.y (mgr.14152) 212 : cluster [DBG] pgmap v192: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-10T14:51:06.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:05 vm03 bash[23394]: cluster 2026-03-10T14:51:04.729147+0000 mgr.y (mgr.14152) 213 : cluster [DBG] pgmap v193: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail; 56 KiB/s, 0 objects/s recovering
2026-03-10T14:51:06.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:05 vm00 bash[28403]: cluster 2026-03-10T14:51:04.729147+0000 mgr.y (mgr.14152) 213 : cluster [DBG] pgmap v193: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail; 56 KiB/s, 0 objects/s recovering
2026-03-10T14:51:06.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:05 vm00 bash[20726]: cluster 2026-03-10T14:51:04.729147+0000 mgr.y (mgr.14152) 213 : cluster [DBG] pgmap v193: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail; 56 KiB/s, 0 objects/s recovering
2026-03-10T14:51:07.048 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/mon.b/config
2026-03-10T14:51:08.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:07 vm00 bash[20726]: cluster 2026-03-10T14:51:06.729436+0000 mgr.y (mgr.14152) 214 : cluster [DBG] pgmap v194: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail; 45 KiB/s, 0 objects/s recovering
2026-03-10T14:51:08.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:07 vm00 bash[20726]: audit 2026-03-10T14:51:07.554409+0000 mon.a (mon.0) 596 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T14:51:08.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:07 vm00 bash[20726]: audit 2026-03-10T14:51:07.556116+0000 mon.a (mon.0) 597 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T14:51:08.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:07 vm00 bash[20726]: audit 2026-03-10T14:51:07.556116+0000 mon.a (mon.0) 597 : audit [INF] from='mgr.14152
192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T14:51:08.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:07 vm00 bash[20726]: audit 2026-03-10T14:51:07.556586+0000 mon.a (mon.0) 598 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:51:08.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:07 vm00 bash[20726]: audit 2026-03-10T14:51:07.556586+0000 mon.a (mon.0) 598 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:51:08.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:07 vm00 bash[28403]: cluster 2026-03-10T14:51:06.729436+0000 mgr.y (mgr.14152) 214 : cluster [DBG] pgmap v194: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail; 45 KiB/s, 0 objects/s recovering 2026-03-10T14:51:08.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:07 vm00 bash[28403]: cluster 2026-03-10T14:51:06.729436+0000 mgr.y (mgr.14152) 214 : cluster [DBG] pgmap v194: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail; 45 KiB/s, 0 objects/s recovering 2026-03-10T14:51:08.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:07 vm00 bash[28403]: audit 2026-03-10T14:51:07.554409+0000 mon.a (mon.0) 596 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T14:51:08.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:07 vm00 bash[28403]: audit 2026-03-10T14:51:07.554409+0000 mon.a (mon.0) 596 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T14:51:08.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:07 vm00 
bash[28403]: audit 2026-03-10T14:51:07.556116+0000 mon.a (mon.0) 597 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T14:51:08.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:07 vm00 bash[28403]: audit 2026-03-10T14:51:07.556116+0000 mon.a (mon.0) 597 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T14:51:08.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:07 vm00 bash[28403]: audit 2026-03-10T14:51:07.556586+0000 mon.a (mon.0) 598 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:51:08.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:07 vm00 bash[28403]: audit 2026-03-10T14:51:07.556586+0000 mon.a (mon.0) 598 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:51:08.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:07 vm03 bash[23394]: cluster 2026-03-10T14:51:06.729436+0000 mgr.y (mgr.14152) 214 : cluster [DBG] pgmap v194: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail; 45 KiB/s, 0 objects/s recovering 2026-03-10T14:51:08.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:07 vm03 bash[23394]: cluster 2026-03-10T14:51:06.729436+0000 mgr.y (mgr.14152) 214 : cluster [DBG] pgmap v194: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail; 45 KiB/s, 0 objects/s recovering 2026-03-10T14:51:08.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:07 vm03 bash[23394]: audit 2026-03-10T14:51:07.554409+0000 mon.a (mon.0) 596 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 
2026-03-10T14:51:08.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:07 vm03 bash[23394]: audit 2026-03-10T14:51:07.554409+0000 mon.a (mon.0) 596 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T14:51:08.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:07 vm03 bash[23394]: audit 2026-03-10T14:51:07.556116+0000 mon.a (mon.0) 597 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T14:51:08.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:07 vm03 bash[23394]: audit 2026-03-10T14:51:07.556116+0000 mon.a (mon.0) 597 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T14:51:08.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:07 vm03 bash[23394]: audit 2026-03-10T14:51:07.556586+0000 mon.a (mon.0) 598 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:51:08.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:07 vm03 bash[23394]: audit 2026-03-10T14:51:07.556586+0000 mon.a (mon.0) 598 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:51:09.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:08 vm03 bash[23394]: audit 2026-03-10T14:51:07.553033+0000 mgr.y (mgr.14152) 215 : audit [DBG] from='client.24259 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:51:09.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:08 vm03 bash[23394]: audit 2026-03-10T14:51:07.553033+0000 mgr.y (mgr.14152) 215 : audit [DBG] from='client.24259 -' 
entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:51:09.466 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:08 vm00 bash[28403]: audit 2026-03-10T14:51:07.553033+0000 mgr.y (mgr.14152) 215 : audit [DBG] from='client.24259 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:51:09.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:08 vm00 bash[28403]: audit 2026-03-10T14:51:07.553033+0000 mgr.y (mgr.14152) 215 : audit [DBG] from='client.24259 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:51:09.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:08 vm00 bash[20726]: audit 2026-03-10T14:51:07.553033+0000 mgr.y (mgr.14152) 215 : audit [DBG] from='client.24259 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:51:09.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:08 vm00 bash[20726]: audit 2026-03-10T14:51:07.553033+0000 mgr.y (mgr.14152) 215 : audit [DBG] from='client.24259 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:51:10.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:09 vm03 bash[23394]: cluster 2026-03-10T14:51:08.729735+0000 mgr.y (mgr.14152) 216 : cluster [DBG] pgmap v195: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail; 41 KiB/s, 0 objects/s recovering 2026-03-10T14:51:10.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:09 vm03 bash[23394]: cluster 2026-03-10T14:51:08.729735+0000 mgr.y (mgr.14152) 216 : cluster [DBG] pgmap v195: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail; 41 KiB/s, 0 objects/s recovering 
2026-03-10T14:51:10.466 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:09 vm00 bash[28403]: cluster 2026-03-10T14:51:08.729735+0000 mgr.y (mgr.14152) 216 : cluster [DBG] pgmap v195: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail; 41 KiB/s, 0 objects/s recovering 2026-03-10T14:51:10.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:09 vm00 bash[28403]: cluster 2026-03-10T14:51:08.729735+0000 mgr.y (mgr.14152) 216 : cluster [DBG] pgmap v195: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail; 41 KiB/s, 0 objects/s recovering 2026-03-10T14:51:10.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:09 vm00 bash[20726]: cluster 2026-03-10T14:51:08.729735+0000 mgr.y (mgr.14152) 216 : cluster [DBG] pgmap v195: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail; 41 KiB/s, 0 objects/s recovering 2026-03-10T14:51:10.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:09 vm00 bash[20726]: cluster 2026-03-10T14:51:08.729735+0000 mgr.y (mgr.14152) 216 : cluster [DBG] pgmap v195: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail; 41 KiB/s, 0 objects/s recovering 2026-03-10T14:51:12.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:12 vm03 bash[23394]: cluster 2026-03-10T14:51:10.730043+0000 mgr.y (mgr.14152) 217 : cluster [DBG] pgmap v196: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-10T14:51:12.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:12 vm03 bash[23394]: cluster 2026-03-10T14:51:10.730043+0000 mgr.y (mgr.14152) 217 : cluster [DBG] pgmap v196: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-10T14:51:12.466 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:12 vm00 bash[28403]: cluster 2026-03-10T14:51:10.730043+0000 mgr.y (mgr.14152) 217 : cluster [DBG] pgmap v196: 1 pgs: 1 active+clean; 449 KiB data, 
188 MiB used, 140 GiB / 140 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-10T14:51:12.466 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:12 vm00 bash[28403]: cluster 2026-03-10T14:51:10.730043+0000 mgr.y (mgr.14152) 217 : cluster [DBG] pgmap v196: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-10T14:51:12.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:12 vm00 bash[20726]: cluster 2026-03-10T14:51:10.730043+0000 mgr.y (mgr.14152) 217 : cluster [DBG] pgmap v196: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-10T14:51:12.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:12 vm00 bash[20726]: cluster 2026-03-10T14:51:10.730043+0000 mgr.y (mgr.14152) 217 : cluster [DBG] pgmap v196: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-10T14:51:13.372 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:13 vm03 bash[23394]: audit 2026-03-10T14:51:12.986502+0000 mon.b (mon.1) 22 : audit [INF] from='client.? 192.168.123.103:0/1130655064' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d982354a-c92b-452c-a8e1-997104ffd93b"}]: dispatch 2026-03-10T14:51:13.372 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:13 vm03 bash[23394]: audit 2026-03-10T14:51:12.986502+0000 mon.b (mon.1) 22 : audit [INF] from='client.? 192.168.123.103:0/1130655064' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d982354a-c92b-452c-a8e1-997104ffd93b"}]: dispatch 2026-03-10T14:51:13.372 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:13 vm03 bash[23394]: audit 2026-03-10T14:51:12.988488+0000 mon.a (mon.0) 599 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d982354a-c92b-452c-a8e1-997104ffd93b"}]: dispatch 2026-03-10T14:51:13.372 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:13 vm03 bash[23394]: audit 2026-03-10T14:51:12.988488+0000 mon.a (mon.0) 599 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d982354a-c92b-452c-a8e1-997104ffd93b"}]: dispatch 2026-03-10T14:51:13.372 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:13 vm03 bash[23394]: audit 2026-03-10T14:51:12.992244+0000 mon.a (mon.0) 600 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d982354a-c92b-452c-a8e1-997104ffd93b"}]': finished 2026-03-10T14:51:13.372 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:13 vm03 bash[23394]: audit 2026-03-10T14:51:12.992244+0000 mon.a (mon.0) 600 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d982354a-c92b-452c-a8e1-997104ffd93b"}]': finished 2026-03-10T14:51:13.372 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:13 vm03 bash[23394]: cluster 2026-03-10T14:51:12.996103+0000 mon.a (mon.0) 601 : cluster [DBG] osdmap e50: 8 total, 7 up, 8 in 2026-03-10T14:51:13.373 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:13 vm03 bash[23394]: cluster 2026-03-10T14:51:12.996103+0000 mon.a (mon.0) 601 : cluster [DBG] osdmap e50: 8 total, 7 up, 8 in 2026-03-10T14:51:13.373 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:13 vm03 bash[23394]: audit 2026-03-10T14:51:12.998369+0000 mon.a (mon.0) 602 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T14:51:13.373 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:13 vm03 bash[23394]: audit 2026-03-10T14:51:12.998369+0000 mon.a (mon.0) 602 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 
2026-03-10T14:51:13.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:13 vm00 bash[28403]: audit 2026-03-10T14:51:12.986502+0000 mon.b (mon.1) 22 : audit [INF] from='client.? 192.168.123.103:0/1130655064' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d982354a-c92b-452c-a8e1-997104ffd93b"}]: dispatch 2026-03-10T14:51:13.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:13 vm00 bash[28403]: audit 2026-03-10T14:51:12.986502+0000 mon.b (mon.1) 22 : audit [INF] from='client.? 192.168.123.103:0/1130655064' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d982354a-c92b-452c-a8e1-997104ffd93b"}]: dispatch 2026-03-10T14:51:13.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:13 vm00 bash[28403]: audit 2026-03-10T14:51:12.988488+0000 mon.a (mon.0) 599 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d982354a-c92b-452c-a8e1-997104ffd93b"}]: dispatch 2026-03-10T14:51:13.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:13 vm00 bash[28403]: audit 2026-03-10T14:51:12.988488+0000 mon.a (mon.0) 599 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d982354a-c92b-452c-a8e1-997104ffd93b"}]: dispatch 2026-03-10T14:51:13.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:13 vm00 bash[28403]: audit 2026-03-10T14:51:12.992244+0000 mon.a (mon.0) 600 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d982354a-c92b-452c-a8e1-997104ffd93b"}]': finished 2026-03-10T14:51:13.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:13 vm00 bash[28403]: audit 2026-03-10T14:51:12.992244+0000 mon.a (mon.0) 600 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d982354a-c92b-452c-a8e1-997104ffd93b"}]': finished 2026-03-10T14:51:13.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:13 vm00 bash[28403]: cluster 2026-03-10T14:51:12.996103+0000 mon.a (mon.0) 601 : cluster [DBG] osdmap e50: 8 total, 7 up, 8 in 2026-03-10T14:51:13.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:13 vm00 bash[28403]: cluster 2026-03-10T14:51:12.996103+0000 mon.a (mon.0) 601 : cluster [DBG] osdmap e50: 8 total, 7 up, 8 in 2026-03-10T14:51:13.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:13 vm00 bash[28403]: audit 2026-03-10T14:51:12.998369+0000 mon.a (mon.0) 602 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T14:51:13.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:13 vm00 bash[28403]: audit 2026-03-10T14:51:12.998369+0000 mon.a (mon.0) 602 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T14:51:13.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:13 vm00 bash[20726]: audit 2026-03-10T14:51:12.986502+0000 mon.b (mon.1) 22 : audit [INF] from='client.? 192.168.123.103:0/1130655064' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d982354a-c92b-452c-a8e1-997104ffd93b"}]: dispatch 2026-03-10T14:51:13.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:13 vm00 bash[20726]: audit 2026-03-10T14:51:12.986502+0000 mon.b (mon.1) 22 : audit [INF] from='client.? 192.168.123.103:0/1130655064' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d982354a-c92b-452c-a8e1-997104ffd93b"}]: dispatch 2026-03-10T14:51:13.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:13 vm00 bash[20726]: audit 2026-03-10T14:51:12.988488+0000 mon.a (mon.0) 599 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d982354a-c92b-452c-a8e1-997104ffd93b"}]: dispatch 2026-03-10T14:51:13.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:13 vm00 bash[20726]: audit 2026-03-10T14:51:12.988488+0000 mon.a (mon.0) 599 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d982354a-c92b-452c-a8e1-997104ffd93b"}]: dispatch 2026-03-10T14:51:13.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:13 vm00 bash[20726]: audit 2026-03-10T14:51:12.992244+0000 mon.a (mon.0) 600 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d982354a-c92b-452c-a8e1-997104ffd93b"}]': finished 2026-03-10T14:51:13.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:13 vm00 bash[20726]: audit 2026-03-10T14:51:12.992244+0000 mon.a (mon.0) 600 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d982354a-c92b-452c-a8e1-997104ffd93b"}]': finished 2026-03-10T14:51:13.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:13 vm00 bash[20726]: cluster 2026-03-10T14:51:12.996103+0000 mon.a (mon.0) 601 : cluster [DBG] osdmap e50: 8 total, 7 up, 8 in 2026-03-10T14:51:13.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:13 vm00 bash[20726]: cluster 2026-03-10T14:51:12.996103+0000 mon.a (mon.0) 601 : cluster [DBG] osdmap e50: 8 total, 7 up, 8 in 2026-03-10T14:51:13.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:13 vm00 bash[20726]: audit 2026-03-10T14:51:12.998369+0000 mon.a (mon.0) 602 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T14:51:13.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:13 vm00 bash[20726]: audit 2026-03-10T14:51:12.998369+0000 mon.a (mon.0) 602 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 
2026-03-10T14:51:14.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:14 vm03 bash[23394]: cluster 2026-03-10T14:51:12.730310+0000 mgr.y (mgr.14152) 218 : cluster [DBG] pgmap v197: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-10T14:51:14.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:14 vm03 bash[23394]: cluster 2026-03-10T14:51:12.730310+0000 mgr.y (mgr.14152) 218 : cluster [DBG] pgmap v197: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-10T14:51:14.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:14 vm03 bash[23394]: audit 2026-03-10T14:51:13.626388+0000 mon.b (mon.1) 23 : audit [DBG] from='client.? 192.168.123.103:0/2407308623' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T14:51:14.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:14 vm03 bash[23394]: audit 2026-03-10T14:51:13.626388+0000 mon.b (mon.1) 23 : audit [DBG] from='client.? 192.168.123.103:0/2407308623' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T14:51:14.466 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:14 vm00 bash[28403]: cluster 2026-03-10T14:51:12.730310+0000 mgr.y (mgr.14152) 218 : cluster [DBG] pgmap v197: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-10T14:51:14.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:14 vm00 bash[28403]: cluster 2026-03-10T14:51:12.730310+0000 mgr.y (mgr.14152) 218 : cluster [DBG] pgmap v197: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-10T14:51:14.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:14 vm00 bash[28403]: audit 2026-03-10T14:51:13.626388+0000 mon.b (mon.1) 23 : audit [DBG] from='client.? 
192.168.123.103:0/2407308623' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T14:51:14.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:14 vm00 bash[28403]: audit 2026-03-10T14:51:13.626388+0000 mon.b (mon.1) 23 : audit [DBG] from='client.? 192.168.123.103:0/2407308623' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T14:51:14.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:14 vm00 bash[20726]: cluster 2026-03-10T14:51:12.730310+0000 mgr.y (mgr.14152) 218 : cluster [DBG] pgmap v197: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-10T14:51:14.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:14 vm00 bash[20726]: cluster 2026-03-10T14:51:12.730310+0000 mgr.y (mgr.14152) 218 : cluster [DBG] pgmap v197: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-10T14:51:14.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:14 vm00 bash[20726]: audit 2026-03-10T14:51:13.626388+0000 mon.b (mon.1) 23 : audit [DBG] from='client.? 192.168.123.103:0/2407308623' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T14:51:14.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:14 vm00 bash[20726]: audit 2026-03-10T14:51:13.626388+0000 mon.b (mon.1) 23 : audit [DBG] from='client.? 
192.168.123.103:0/2407308623' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T14:51:16.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:16 vm03 bash[23394]: cluster 2026-03-10T14:51:14.730564+0000 mgr.y (mgr.14152) 219 : cluster [DBG] pgmap v199: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-10T14:51:16.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:16 vm03 bash[23394]: cluster 2026-03-10T14:51:14.730564+0000 mgr.y (mgr.14152) 219 : cluster [DBG] pgmap v199: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-10T14:51:16.466 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:16 vm00 bash[28403]: cluster 2026-03-10T14:51:14.730564+0000 mgr.y (mgr.14152) 219 : cluster [DBG] pgmap v199: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-10T14:51:16.466 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:16 vm00 bash[28403]: cluster 2026-03-10T14:51:14.730564+0000 mgr.y (mgr.14152) 219 : cluster [DBG] pgmap v199: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-10T14:51:16.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:16 vm00 bash[20726]: cluster 2026-03-10T14:51:14.730564+0000 mgr.y (mgr.14152) 219 : cluster [DBG] pgmap v199: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-10T14:51:16.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:16 vm00 bash[20726]: cluster 2026-03-10T14:51:14.730564+0000 mgr.y (mgr.14152) 219 : cluster [DBG] pgmap v199: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-10T14:51:18.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:18 vm00 bash[28403]: cluster 2026-03-10T14:51:16.730911+0000 mgr.y (mgr.14152) 220 : cluster [DBG] pgmap v200: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-10T14:51:18.467 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:18 vm00 bash[28403]: cluster 2026-03-10T14:51:16.730911+0000 mgr.y (mgr.14152) 220 : cluster [DBG] pgmap v200: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-10T14:51:18.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:18 vm00 bash[20726]: cluster 2026-03-10T14:51:16.730911+0000 mgr.y (mgr.14152) 220 : cluster [DBG] pgmap v200: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-10T14:51:18.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:18 vm00 bash[20726]: cluster 2026-03-10T14:51:16.730911+0000 mgr.y (mgr.14152) 220 : cluster [DBG] pgmap v200: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-10T14:51:18.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:18 vm03 bash[23394]: cluster 2026-03-10T14:51:16.730911+0000 mgr.y (mgr.14152) 220 : cluster [DBG] pgmap v200: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-10T14:51:18.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:18 vm03 bash[23394]: cluster 2026-03-10T14:51:16.730911+0000 mgr.y (mgr.14152) 220 : cluster [DBG] pgmap v200: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-10T14:51:20.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:20 vm03 bash[23394]: cluster 2026-03-10T14:51:18.731184+0000 mgr.y (mgr.14152) 221 : cluster [DBG] pgmap v201: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-10T14:51:20.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:20 vm03 bash[23394]: cluster 2026-03-10T14:51:18.731184+0000 mgr.y (mgr.14152) 221 : cluster [DBG] pgmap v201: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-10T14:51:20.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:20 vm00 bash[28403]: cluster 2026-03-10T14:51:18.731184+0000 mgr.y (mgr.14152) 221 : cluster [DBG] pgmap v201: 1 
pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-10T14:51:20.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:20 vm00 bash[28403]: cluster 2026-03-10T14:51:18.731184+0000 mgr.y (mgr.14152) 221 : cluster [DBG] pgmap v201: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-10T14:51:20.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:20 vm00 bash[20726]: cluster 2026-03-10T14:51:18.731184+0000 mgr.y (mgr.14152) 221 : cluster [DBG] pgmap v201: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-10T14:51:20.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:20 vm00 bash[20726]: cluster 2026-03-10T14:51:18.731184+0000 mgr.y (mgr.14152) 221 : cluster [DBG] pgmap v201: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-10T14:51:21.624 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:21 vm03 bash[23394]: cluster 2026-03-10T14:51:20.731435+0000 mgr.y (mgr.14152) 222 : cluster [DBG] pgmap v202: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-10T14:51:21.624 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:21 vm03 bash[23394]: cluster 2026-03-10T14:51:20.731435+0000 mgr.y (mgr.14152) 222 : cluster [DBG] pgmap v202: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-10T14:51:21.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:21 vm00 bash[28403]: cluster 2026-03-10T14:51:20.731435+0000 mgr.y (mgr.14152) 222 : cluster [DBG] pgmap v202: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-10T14:51:21.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:21 vm00 bash[28403]: cluster 2026-03-10T14:51:20.731435+0000 mgr.y (mgr.14152) 222 : cluster [DBG] pgmap v202: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-10T14:51:21.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:21 vm00 
bash[20726]: cluster 2026-03-10T14:51:20.731435+0000 mgr.y (mgr.14152) 222 : cluster [DBG] pgmap v202: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-10T14:51:21.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:21 vm00 bash[20726]: cluster 2026-03-10T14:51:20.731435+0000 mgr.y (mgr.14152) 222 : cluster [DBG] pgmap v202: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-10T14:51:22.450 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:22 vm03 bash[23394]: audit 2026-03-10T14:51:21.909490+0000 mon.a (mon.0) 603 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-10T14:51:22.451 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:22 vm03 bash[23394]: audit 2026-03-10T14:51:21.909490+0000 mon.a (mon.0) 603 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-10T14:51:22.451 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:22 vm03 bash[23394]: audit 2026-03-10T14:51:21.910135+0000 mon.a (mon.0) 604 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:51:22.451 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:22 vm03 bash[23394]: audit 2026-03-10T14:51:21.910135+0000 mon.a (mon.0) 604 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:51:22.451 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:22 vm03 bash[23394]: cephadm 2026-03-10T14:51:21.910593+0000 mgr.y (mgr.14152) 223 : cephadm [INF] Deploying daemon osd.7 on vm03 2026-03-10T14:51:22.451 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:22 vm03 bash[23394]: cephadm 2026-03-10T14:51:21.910593+0000 mgr.y (mgr.14152) 223 : cephadm [INF] Deploying daemon osd.7 on vm03 
2026-03-10T14:51:22.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:22 vm00 bash[28403]: audit 2026-03-10T14:51:21.909490+0000 mon.a (mon.0) 603 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-10T14:51:22.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:22 vm00 bash[28403]: audit 2026-03-10T14:51:21.909490+0000 mon.a (mon.0) 603 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-10T14:51:22.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:22 vm00 bash[28403]: audit 2026-03-10T14:51:21.910135+0000 mon.a (mon.0) 604 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:51:22.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:22 vm00 bash[28403]: audit 2026-03-10T14:51:21.910135+0000 mon.a (mon.0) 604 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:51:22.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:22 vm00 bash[28403]: cephadm 2026-03-10T14:51:21.910593+0000 mgr.y (mgr.14152) 223 : cephadm [INF] Deploying daemon osd.7 on vm03 2026-03-10T14:51:22.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:22 vm00 bash[28403]: cephadm 2026-03-10T14:51:21.910593+0000 mgr.y (mgr.14152) 223 : cephadm [INF] Deploying daemon osd.7 on vm03 2026-03-10T14:51:22.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:22 vm00 bash[20726]: audit 2026-03-10T14:51:21.909490+0000 mon.a (mon.0) 603 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-10T14:51:22.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:22 vm00 bash[20726]: audit 2026-03-10T14:51:21.909490+0000 mon.a (mon.0) 603 : 
audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-10T14:51:22.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:22 vm00 bash[20726]: audit 2026-03-10T14:51:21.910135+0000 mon.a (mon.0) 604 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:51:22.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:22 vm00 bash[20726]: audit 2026-03-10T14:51:21.910135+0000 mon.a (mon.0) 604 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:51:22.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:22 vm00 bash[20726]: cephadm 2026-03-10T14:51:21.910593+0000 mgr.y (mgr.14152) 223 : cephadm [INF] Deploying daemon osd.7 on vm03 2026-03-10T14:51:22.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:22 vm00 bash[20726]: cephadm 2026-03-10T14:51:21.910593+0000 mgr.y (mgr.14152) 223 : cephadm [INF] Deploying daemon osd.7 on vm03 2026-03-10T14:51:23.165 INFO:journalctl@ceph.osd.5.vm03.stdout:Mar 10 14:51:23 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:51:23.165 INFO:journalctl@ceph.osd.6.vm03.stdout:Mar 10 14:51:23 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:51:23.165 INFO:journalctl@ceph.osd.4.vm03.stdout:Mar 10 14:51:23 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:51:23.165 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:23 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:51:23.166 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:51:23 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:51:23.607 INFO:journalctl@ceph.osd.4.vm03.stdout:Mar 10 14:51:23 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-10T14:51:23.608 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:51:23 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:51:23.608 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:23 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:51:23.608 INFO:journalctl@ceph.osd.5.vm03.stdout:Mar 10 14:51:23 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:51:23.608 INFO:journalctl@ceph.osd.6.vm03.stdout:Mar 10 14:51:23 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-10T14:51:23.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:23 vm03 bash[23394]: cluster 2026-03-10T14:51:22.731703+0000 mgr.y (mgr.14152) 224 : cluster [DBG] pgmap v203: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-10T14:51:23.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:23 vm03 bash[23394]: cluster 2026-03-10T14:51:22.731703+0000 mgr.y (mgr.14152) 224 : cluster [DBG] pgmap v203: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-10T14:51:23.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:23 vm00 bash[28403]: cluster 2026-03-10T14:51:22.731703+0000 mgr.y (mgr.14152) 224 : cluster [DBG] pgmap v203: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-10T14:51:23.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:23 vm00 bash[28403]: cluster 2026-03-10T14:51:22.731703+0000 mgr.y (mgr.14152) 224 : cluster [DBG] pgmap v203: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-10T14:51:23.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:23 vm00 bash[20726]: cluster 2026-03-10T14:51:22.731703+0000 mgr.y (mgr.14152) 224 : cluster [DBG] pgmap v203: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-10T14:51:23.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:23 vm00 bash[20726]: cluster 2026-03-10T14:51:22.731703+0000 mgr.y (mgr.14152) 224 : cluster [DBG] pgmap v203: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-10T14:51:24.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:24 vm03 bash[23394]: audit 2026-03-10T14:51:23.604114+0000 mon.a (mon.0) 605 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:51:24.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:24 vm03 bash[23394]: audit 2026-03-10T14:51:23.604114+0000 
mon.a (mon.0) 605 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:51:24.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:24 vm03 bash[23394]: audit 2026-03-10T14:51:23.625559+0000 mon.a (mon.0) 606 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:51:24.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:24 vm03 bash[23394]: audit 2026-03-10T14:51:23.625559+0000 mon.a (mon.0) 606 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:51:24.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:24 vm03 bash[23394]: audit 2026-03-10T14:51:23.632466+0000 mon.a (mon.0) 607 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:51:24.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:24 vm03 bash[23394]: audit 2026-03-10T14:51:23.632466+0000 mon.a (mon.0) 607 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:51:24.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:24 vm00 bash[28403]: audit 2026-03-10T14:51:23.604114+0000 mon.a (mon.0) 605 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:51:24.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:24 vm00 bash[28403]: audit 2026-03-10T14:51:23.604114+0000 mon.a (mon.0) 605 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:51:24.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:24 vm00 bash[28403]: audit 2026-03-10T14:51:23.625559+0000 mon.a (mon.0) 606 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:51:24.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:24 vm00 bash[28403]: audit 
2026-03-10T14:51:23.625559+0000 mon.a (mon.0) 606 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:51:24.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:24 vm00 bash[28403]: audit 2026-03-10T14:51:23.632466+0000 mon.a (mon.0) 607 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:51:24.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:24 vm00 bash[28403]: audit 2026-03-10T14:51:23.632466+0000 mon.a (mon.0) 607 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:51:24.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:24 vm00 bash[20726]: audit 2026-03-10T14:51:23.604114+0000 mon.a (mon.0) 605 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:51:24.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:24 vm00 bash[20726]: audit 2026-03-10T14:51:23.604114+0000 mon.a (mon.0) 605 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:51:24.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:24 vm00 bash[20726]: audit 2026-03-10T14:51:23.625559+0000 mon.a (mon.0) 606 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:51:24.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:24 vm00 bash[20726]: audit 2026-03-10T14:51:23.625559+0000 mon.a (mon.0) 606 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:51:24.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:24 vm00 bash[20726]: audit 2026-03-10T14:51:23.632466+0000 mon.a (mon.0) 607 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:51:24.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:24 vm00 bash[20726]: audit 2026-03-10T14:51:23.632466+0000 mon.a (mon.0) 607 : 
audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:51:25.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:25 vm03 bash[23394]: cluster 2026-03-10T14:51:24.731969+0000 mgr.y (mgr.14152) 225 : cluster [DBG] pgmap v204: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-10T14:51:25.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:25 vm03 bash[23394]: cluster 2026-03-10T14:51:24.731969+0000 mgr.y (mgr.14152) 225 : cluster [DBG] pgmap v204: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-10T14:51:25.967 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:25 vm00 bash[28403]: cluster 2026-03-10T14:51:24.731969+0000 mgr.y (mgr.14152) 225 : cluster [DBG] pgmap v204: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-10T14:51:25.967 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:25 vm00 bash[28403]: cluster 2026-03-10T14:51:24.731969+0000 mgr.y (mgr.14152) 225 : cluster [DBG] pgmap v204: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-10T14:51:25.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:25 vm00 bash[20726]: cluster 2026-03-10T14:51:24.731969+0000 mgr.y (mgr.14152) 225 : cluster [DBG] pgmap v204: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-10T14:51:25.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:25 vm00 bash[20726]: cluster 2026-03-10T14:51:24.731969+0000 mgr.y (mgr.14152) 225 : cluster [DBG] pgmap v204: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-10T14:51:28.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:27 vm03 bash[23394]: cluster 2026-03-10T14:51:26.732264+0000 mgr.y (mgr.14152) 226 : cluster [DBG] pgmap v205: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-10T14:51:28.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:27 vm03 
bash[23394]: cluster 2026-03-10T14:51:26.732264+0000 mgr.y (mgr.14152) 226 : cluster [DBG] pgmap v205: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-10T14:51:28.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:27 vm03 bash[23394]: audit 2026-03-10T14:51:27.808259+0000 mon.b (mon.1) 24 : audit [INF] from='osd.7 v2:192.168.123.103:6812/1578983727' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-10T14:51:28.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:27 vm03 bash[23394]: audit 2026-03-10T14:51:27.808259+0000 mon.b (mon.1) 24 : audit [INF] from='osd.7 v2:192.168.123.103:6812/1578983727' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-10T14:51:28.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:27 vm03 bash[23394]: audit 2026-03-10T14:51:27.810072+0000 mon.a (mon.0) 608 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-10T14:51:28.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:27 vm03 bash[23394]: audit 2026-03-10T14:51:27.810072+0000 mon.a (mon.0) 608 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-10T14:51:28.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:27 vm00 bash[20726]: cluster 2026-03-10T14:51:26.732264+0000 mgr.y (mgr.14152) 226 : cluster [DBG] pgmap v205: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-10T14:51:28.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:27 vm00 bash[20726]: cluster 2026-03-10T14:51:26.732264+0000 mgr.y (mgr.14152) 226 : cluster [DBG] pgmap v205: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-10T14:51:28.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:27 vm00 bash[20726]: 
audit 2026-03-10T14:51:27.808259+0000 mon.b (mon.1) 24 : audit [INF] from='osd.7 v2:192.168.123.103:6812/1578983727' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-10T14:51:28.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:27 vm00 bash[20726]: audit 2026-03-10T14:51:27.808259+0000 mon.b (mon.1) 24 : audit [INF] from='osd.7 v2:192.168.123.103:6812/1578983727' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-10T14:51:28.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:27 vm00 bash[20726]: audit 2026-03-10T14:51:27.810072+0000 mon.a (mon.0) 608 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-10T14:51:28.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:28 vm00 bash[20726]: audit 2026-03-10T14:51:27.810072+0000 mon.a (mon.0) 608 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-10T14:51:28.466 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:27 vm00 bash[28403]: cluster 2026-03-10T14:51:26.732264+0000 mgr.y (mgr.14152) 226 : cluster [DBG] pgmap v205: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-10T14:51:28.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:27 vm00 bash[28403]: cluster 2026-03-10T14:51:26.732264+0000 mgr.y (mgr.14152) 226 : cluster [DBG] pgmap v205: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-10T14:51:28.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:27 vm00 bash[28403]: audit 2026-03-10T14:51:27.808259+0000 mon.b (mon.1) 24 : audit [INF] from='osd.7 v2:192.168.123.103:6812/1578983727' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-10T14:51:28.467 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:27 vm00 bash[28403]: audit 2026-03-10T14:51:27.808259+0000 mon.b (mon.1) 24 : audit [INF] from='osd.7 v2:192.168.123.103:6812/1578983727' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-10T14:51:28.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:27 vm00 bash[28403]: audit 2026-03-10T14:51:27.810072+0000 mon.a (mon.0) 608 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-10T14:51:28.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:27 vm00 bash[28403]: audit 2026-03-10T14:51:27.810072+0000 mon.a (mon.0) 608 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-10T14:51:29.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:29 vm03 bash[23394]: audit 2026-03-10T14:51:28.007334+0000 mon.a (mon.0) 609 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-10T14:51:29.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:29 vm03 bash[23394]: audit 2026-03-10T14:51:28.007334+0000 mon.a (mon.0) 609 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-10T14:51:29.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:29 vm03 bash[23394]: audit 2026-03-10T14:51:28.009238+0000 mon.b (mon.1) 25 : audit [INF] from='osd.7 v2:192.168.123.103:6812/1578983727' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-10T14:51:29.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:29 vm03 bash[23394]: audit 2026-03-10T14:51:28.009238+0000 mon.b (mon.1) 25 : audit [INF] from='osd.7 v2:192.168.123.103:6812/1578983727' 
entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-10T14:51:29.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:29 vm03 bash[23394]: cluster 2026-03-10T14:51:28.011715+0000 mon.a (mon.0) 610 : cluster [DBG] osdmap e51: 8 total, 7 up, 8 in 2026-03-10T14:51:29.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:29 vm03 bash[23394]: cluster 2026-03-10T14:51:28.011715+0000 mon.a (mon.0) 610 : cluster [DBG] osdmap e51: 8 total, 7 up, 8 in 2026-03-10T14:51:29.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:29 vm03 bash[23394]: audit 2026-03-10T14:51:28.012277+0000 mon.a (mon.0) 611 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T14:51:29.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:29 vm03 bash[23394]: audit 2026-03-10T14:51:28.012277+0000 mon.a (mon.0) 611 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T14:51:29.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:29 vm03 bash[23394]: audit 2026-03-10T14:51:28.012595+0000 mon.a (mon.0) 612 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-10T14:51:29.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:29 vm03 bash[23394]: audit 2026-03-10T14:51:28.012595+0000 mon.a (mon.0) 612 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-10T14:51:29.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:29 vm00 bash[20726]: audit 2026-03-10T14:51:28.007334+0000 mon.a (mon.0) 609 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": 
["7"]}]': finished 2026-03-10T14:51:29.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:29 vm00 bash[20726]: audit 2026-03-10T14:51:28.007334+0000 mon.a (mon.0) 609 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-10T14:51:29.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:29 vm00 bash[20726]: audit 2026-03-10T14:51:28.009238+0000 mon.b (mon.1) 25 : audit [INF] from='osd.7 v2:192.168.123.103:6812/1578983727' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-10T14:51:29.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:29 vm00 bash[20726]: audit 2026-03-10T14:51:28.009238+0000 mon.b (mon.1) 25 : audit [INF] from='osd.7 v2:192.168.123.103:6812/1578983727' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-10T14:51:29.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:29 vm00 bash[20726]: cluster 2026-03-10T14:51:28.011715+0000 mon.a (mon.0) 610 : cluster [DBG] osdmap e51: 8 total, 7 up, 8 in 2026-03-10T14:51:29.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:29 vm00 bash[20726]: cluster 2026-03-10T14:51:28.011715+0000 mon.a (mon.0) 610 : cluster [DBG] osdmap e51: 8 total, 7 up, 8 in 2026-03-10T14:51:29.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:29 vm00 bash[20726]: audit 2026-03-10T14:51:28.012277+0000 mon.a (mon.0) 611 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T14:51:29.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:29 vm00 bash[20726]: audit 2026-03-10T14:51:28.012277+0000 mon.a (mon.0) 611 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T14:51:29.467 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:29 vm00 bash[20726]: audit 2026-03-10T14:51:28.012595+0000 mon.a (mon.0) 612 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-10T14:51:29.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:29 vm00 bash[20726]: audit 2026-03-10T14:51:28.012595+0000 mon.a (mon.0) 612 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-10T14:51:29.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:29 vm00 bash[28403]: audit 2026-03-10T14:51:28.007334+0000 mon.a (mon.0) 609 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-10T14:51:29.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:29 vm00 bash[28403]: audit 2026-03-10T14:51:28.007334+0000 mon.a (mon.0) 609 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-10T14:51:29.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:29 vm00 bash[28403]: audit 2026-03-10T14:51:28.009238+0000 mon.b (mon.1) 25 : audit [INF] from='osd.7 v2:192.168.123.103:6812/1578983727' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-10T14:51:29.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:29 vm00 bash[28403]: audit 2026-03-10T14:51:28.009238+0000 mon.b (mon.1) 25 : audit [INF] from='osd.7 v2:192.168.123.103:6812/1578983727' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-10T14:51:29.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:29 vm00 bash[28403]: cluster 
2026-03-10T14:51:28.011715+0000 mon.a (mon.0) 610 : cluster [DBG] osdmap e51: 8 total, 7 up, 8 in 2026-03-10T14:51:29.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:29 vm00 bash[28403]: cluster 2026-03-10T14:51:28.011715+0000 mon.a (mon.0) 610 : cluster [DBG] osdmap e51: 8 total, 7 up, 8 in 2026-03-10T14:51:29.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:29 vm00 bash[28403]: audit 2026-03-10T14:51:28.012277+0000 mon.a (mon.0) 611 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T14:51:29.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:29 vm00 bash[28403]: audit 2026-03-10T14:51:28.012277+0000 mon.a (mon.0) 611 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T14:51:29.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:29 vm00 bash[28403]: audit 2026-03-10T14:51:28.012595+0000 mon.a (mon.0) 612 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-10T14:51:29.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:29 vm00 bash[28403]: audit 2026-03-10T14:51:28.012595+0000 mon.a (mon.0) 612 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-10T14:51:30.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:30 vm03 bash[23394]: cluster 2026-03-10T14:51:28.732579+0000 mgr.y (mgr.14152) 227 : cluster [DBG] pgmap v207: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-10T14:51:30.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:30 vm03 bash[23394]: cluster 2026-03-10T14:51:28.732579+0000 mgr.y (mgr.14152) 227 : cluster [DBG] pgmap v207: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 
GiB / 140 GiB avail
2026-03-10T14:51:30.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:30 vm03 bash[23394]: audit 2026-03-10T14:51:29.027965+0000 mon.a (mon.0) 613 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished
2026-03-10T14:51:30.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:30 vm03 bash[23394]: cluster 2026-03-10T14:51:29.032763+0000 mon.a (mon.0) 614 : cluster [DBG] osdmap e52: 8 total, 7 up, 8 in
2026-03-10T14:51:30.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:30 vm03 bash[23394]: audit 2026-03-10T14:51:29.033752+0000 mon.a (mon.0) 615 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T14:51:30.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:30 vm03 bash[23394]: audit 2026-03-10T14:51:29.063068+0000 mon.a (mon.0) 616 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T14:51:30.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:30 vm03 bash[23394]: audit 2026-03-10T14:51:30.011955+0000 mon.a (mon.0) 617 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:51:30.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:30 vm03 bash[23394]: audit 2026-03-10T14:51:30.032161+0000 mon.a (mon.0) 618 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:51:30.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:30 vm00 bash[20726]: cluster 2026-03-10T14:51:28.732579+0000 mgr.y (mgr.14152) 227 : cluster [DBG] pgmap v207: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-10T14:51:30.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:30 vm00 bash[20726]: audit 2026-03-10T14:51:29.027965+0000 mon.a (mon.0) 613 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished
2026-03-10T14:51:30.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:30 vm00 bash[20726]: cluster 2026-03-10T14:51:29.032763+0000 mon.a (mon.0) 614 : cluster [DBG] osdmap e52: 8 total, 7 up, 8 in
2026-03-10T14:51:30.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:30 vm00 bash[20726]: audit 2026-03-10T14:51:29.033752+0000 mon.a (mon.0) 615 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T14:51:30.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:30 vm00 bash[20726]: audit 2026-03-10T14:51:29.063068+0000 mon.a (mon.0) 616 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T14:51:30.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:30 vm00 bash[20726]: audit 2026-03-10T14:51:30.011955+0000 mon.a (mon.0) 617 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:51:30.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:30 vm00 bash[20726]: audit 2026-03-10T14:51:30.032161+0000 mon.a (mon.0) 618 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:51:30.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:30 vm00 bash[28403]: cluster 2026-03-10T14:51:28.732579+0000 mgr.y (mgr.14152) 227 : cluster [DBG] pgmap v207: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-10T14:51:30.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:30 vm00 bash[28403]: audit 2026-03-10T14:51:29.027965+0000 mon.a (mon.0) 613 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished
2026-03-10T14:51:30.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:30 vm00 bash[28403]: cluster 2026-03-10T14:51:29.032763+0000 mon.a (mon.0) 614 : cluster [DBG] osdmap e52: 8 total, 7 up, 8 in
2026-03-10T14:51:30.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:30 vm00 bash[28403]: audit 2026-03-10T14:51:29.033752+0000 mon.a (mon.0) 615 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T14:51:30.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:30 vm00 bash[28403]: audit 2026-03-10T14:51:29.063068+0000 mon.a (mon.0) 616 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T14:51:30.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:30 vm00 bash[28403]: audit 2026-03-10T14:51:30.011955+0000 mon.a (mon.0) 617 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:51:30.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:30 vm00 bash[28403]: audit 2026-03-10T14:51:30.032161+0000 mon.a (mon.0) 618 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:51:31.162 INFO:teuthology.orchestra.run.vm03.stdout:Created osd(s) 7 on host 'vm03'
2026-03-10T14:51:31.283 DEBUG:teuthology.orchestra.run.vm03:osd.7> sudo journalctl -f -n 0 -u ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@osd.7.service
2026-03-10T14:51:31.284 INFO:tasks.cephadm:Waiting for 8 OSDs to come up...
2026-03-10T14:51:31.284 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 -- ceph osd stat -f json
2026-03-10T14:51:31.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:31 vm03 bash[23394]: cluster 2026-03-10T14:51:28.759734+0000 osd.7 (osd.7) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T14:51:31.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:31 vm03 bash[23394]: cluster 2026-03-10T14:51:28.759782+0000 osd.7 (osd.7) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T14:51:31.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:31 vm03 bash[23394]: audit 2026-03-10T14:51:30.039260+0000 mon.a (mon.0) 619 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:51:31.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:31 vm03 bash[23394]: audit 2026-03-10T14:51:30.039966+0000 mon.a (mon.0) 620 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T14:51:31.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:31 vm03 bash[23394]: audit 2026-03-10T14:51:30.040219+0000 mon.a (mon.0) 621 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T14:51:31.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:31 vm03 bash[23394]: cluster 2026-03-10T14:51:30.056123+0000 mon.a (mon.0) 622 : cluster [DBG] osdmap e53: 8 total, 7 up, 8 in
2026-03-10T14:51:31.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:31 vm03 bash[23394]: audit 2026-03-10T14:51:30.062992+0000 mon.a (mon.0) 623 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T14:51:31.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:31 vm03 bash[23394]: audit 2026-03-10T14:51:30.098283+0000 mon.a (mon.0) 624 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:51:31.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:31 vm03 bash[23394]: audit 2026-03-10T14:51:30.412608+0000 mon.a (mon.0) 625 : audit [INF] from='osd.7 ' entity='osd.7'
2026-03-10T14:51:31.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:31 vm03 bash[23394]: audit 2026-03-10T14:51:31.047463+0000 mon.a (mon.0) 626 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T14:51:31.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:31 vm00 bash[20726]: cluster 2026-03-10T14:51:28.759734+0000 osd.7 (osd.7) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T14:51:31.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:31 vm00 bash[20726]: cluster 2026-03-10T14:51:28.759782+0000 osd.7 (osd.7) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T14:51:31.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:31 vm00 bash[20726]: audit 2026-03-10T14:51:30.039260+0000 mon.a (mon.0) 619 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:51:31.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:31 vm00 bash[20726]: audit 2026-03-10T14:51:30.039966+0000 mon.a (mon.0) 620 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T14:51:31.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:31 vm00 bash[20726]: audit 2026-03-10T14:51:30.040219+0000 mon.a (mon.0) 621 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T14:51:31.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:31 vm00 bash[20726]: cluster 2026-03-10T14:51:30.056123+0000 mon.a (mon.0) 622 : cluster [DBG] osdmap e53: 8 total, 7 up, 8 in
2026-03-10T14:51:31.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:31 vm00 bash[20726]: audit 2026-03-10T14:51:30.062992+0000 mon.a (mon.0) 623 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T14:51:31.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:31 vm00 bash[20726]: audit 2026-03-10T14:51:30.098283+0000 mon.a (mon.0) 624 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:51:31.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:31 vm00 bash[20726]: audit 2026-03-10T14:51:30.412608+0000 mon.a (mon.0) 625 : audit [INF] from='osd.7 ' entity='osd.7'
2026-03-10T14:51:31.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:31 vm00 bash[20726]: audit 2026-03-10T14:51:31.047463+0000 mon.a (mon.0) 626 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T14:51:31.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:31 vm00 bash[28403]: cluster 2026-03-10T14:51:28.759734+0000 osd.7 (osd.7) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T14:51:31.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:31 vm00 bash[28403]: cluster 2026-03-10T14:51:28.759782+0000 osd.7 (osd.7) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T14:51:31.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:31 vm00 bash[28403]: audit 2026-03-10T14:51:30.039260+0000 mon.a (mon.0) 619 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:51:31.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:31 vm00 bash[28403]: audit 2026-03-10T14:51:30.039966+0000 mon.a (mon.0) 620 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T14:51:31.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:31 vm00 bash[28403]: audit 2026-03-10T14:51:30.040219+0000 mon.a (mon.0) 621 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T14:51:31.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:31 vm00 bash[28403]: cluster 2026-03-10T14:51:30.056123+0000 mon.a (mon.0) 622 : cluster [DBG] osdmap e53: 8 total, 7 up, 8 in
2026-03-10T14:51:31.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:31 vm00 bash[28403]: audit 2026-03-10T14:51:30.062992+0000 mon.a (mon.0) 623 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T14:51:31.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:31 vm00 bash[28403]: audit 2026-03-10T14:51:30.098283+0000 mon.a (mon.0) 624 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:51:31.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:31 vm00 bash[28403]: audit 2026-03-10T14:51:30.412608+0000 mon.a (mon.0) 625 : audit [INF] from='osd.7 ' entity='osd.7'
2026-03-10T14:51:31.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:31 vm00 bash[28403]: audit 2026-03-10T14:51:31.047463+0000 mon.a (mon.0) 626 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T14:51:32.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:32 vm03 bash[23394]: cluster 2026-03-10T14:51:30.732845+0000 mgr.y (mgr.14152) 228 : cluster [DBG] pgmap v210: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-10T14:51:32.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:32 vm03 bash[23394]: cluster 2026-03-10T14:51:31.100155+0000 mon.a (mon.0) 627 : cluster [INF] osd.7 v2:192.168.123.103:6812/1578983727 boot
2026-03-10T14:51:32.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:32 vm03 bash[23394]: cluster 2026-03-10T14:51:31.100532+0000 mon.a (mon.0) 628 : cluster [DBG] osdmap e54: 8 total, 8 up, 8 in
2026-03-10T14:51:32.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:32 vm03 bash[23394]: audit 2026-03-10T14:51:31.103285+0000 mon.a (mon.0) 629 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T14:51:32.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:32 vm03 bash[23394]: audit 2026-03-10T14:51:31.146790+0000 mon.a (mon.0) 630 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:51:32.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:32 vm03 bash[23394]: audit 2026-03-10T14:51:31.151585+0000 mon.a (mon.0) 631 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:51:32.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:32 vm03 bash[23394]: audit 2026-03-10T14:51:31.157314+0000 mon.a (mon.0) 632 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:51:32.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:32 vm03 bash[23394]: cluster 2026-03-10T14:51:32.103145+0000 mon.a (mon.0) 633 : cluster [DBG] osdmap e55: 8 total, 8 up, 8 in
2026-03-10T14:51:32.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:32 vm00 bash[20726]: cluster 2026-03-10T14:51:30.732845+0000 mgr.y (mgr.14152) 228 : cluster [DBG] pgmap v210: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-10T14:51:32.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:32 vm00 bash[20726]: cluster 2026-03-10T14:51:31.100155+0000 mon.a (mon.0) 627 : cluster [INF] osd.7 v2:192.168.123.103:6812/1578983727 boot
2026-03-10T14:51:32.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:32 vm00 bash[20726]: cluster 2026-03-10T14:51:31.100532+0000 mon.a (mon.0) 628 : cluster [DBG] osdmap e54: 8 total, 8 up, 8 in
2026-03-10T14:51:32.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:32 vm00 bash[20726]: audit 2026-03-10T14:51:31.103285+0000 mon.a (mon.0) 629 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T14:51:32.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:32 vm00 bash[20726]: audit 2026-03-10T14:51:31.146790+0000 mon.a (mon.0) 630 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:51:32.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:32 vm00 bash[20726]: audit 2026-03-10T14:51:31.151585+0000 mon.a (mon.0) 631 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:51:32.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:32 vm00 bash[20726]: audit 2026-03-10T14:51:31.157314+0000 mon.a (mon.0) 632 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:51:32.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:32 vm00 bash[20726]: cluster 2026-03-10T14:51:32.103145+0000 mon.a (mon.0) 633 : cluster [DBG] osdmap e55: 8 total, 8 up, 8 in
2026-03-10T14:51:32.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:32 vm00 bash[28403]: cluster 2026-03-10T14:51:30.732845+0000 mgr.y (mgr.14152) 228 : cluster [DBG] pgmap v210: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-10T14:51:32.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:32 vm00 bash[28403]: cluster 2026-03-10T14:51:31.100155+0000 mon.a (mon.0) 627 : cluster [INF] osd.7 v2:192.168.123.103:6812/1578983727 boot
2026-03-10T14:51:32.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:32 vm00 bash[28403]: cluster 2026-03-10T14:51:31.100532+0000 mon.a (mon.0) 628 : cluster [DBG] osdmap e54: 8 total, 8 up, 8 in
2026-03-10T14:51:32.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:32 vm00 bash[28403]: audit 2026-03-10T14:51:31.103285+0000 mon.a (mon.0) 629 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T14:51:32.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:32 vm00 bash[28403]: audit 2026-03-10T14:51:31.146790+0000 mon.a (mon.0) 630 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:51:32.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:32 vm00 bash[28403]: audit 2026-03-10T14:51:31.151585+0000 mon.a (mon.0) 631 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:51:32.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:32 vm00 bash[28403]: audit 2026-03-10T14:51:31.157314+0000 mon.a (mon.0) 632 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:51:32.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:32 vm00 bash[28403]: cluster 2026-03-10T14:51:32.103145+0000 mon.a (mon.0) 633 : cluster [DBG] osdmap e55: 8 total, 8 up, 8 in
2026-03-10T14:51:34.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:34 vm03 bash[23394]: cluster 2026-03-10T14:51:32.733122+0000 mgr.y (mgr.14152) 229 : cluster [DBG] pgmap v213: 1 pgs: 1 remapped+peering; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail
2026-03-10T14:51:34.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:34 vm03 bash[23394]: cluster 2026-03-10T14:51:33.123211+0000 mon.a (mon.0) 634 : cluster [DBG] osdmap e56: 8 total, 8 up, 8 in
2026-03-10T14:51:34.466 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:34 vm00 bash[28403]: cluster 2026-03-10T14:51:32.733122+0000 mgr.y (mgr.14152) 229 : cluster [DBG] pgmap v213: 1 pgs: 1 remapped+peering; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail
2026-03-10T14:51:34.466 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:34 vm00 bash[28403]: cluster 2026-03-10T14:51:33.123211+0000 mon.a (mon.0) 634 : cluster [DBG] osdmap e56: 8 total, 8 up, 8 in
2026-03-10T14:51:34.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:34 vm00 bash[20726]: cluster 2026-03-10T14:51:32.733122+0000 mgr.y (mgr.14152) 229 : cluster [DBG] pgmap v213: 1 pgs: 1 remapped+peering; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail
2026-03-10T14:51:34.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:34 vm00 bash[20726]: cluster 2026-03-10T14:51:33.123211+0000 mon.a (mon.0) 634 : cluster [DBG] osdmap e56: 8 total, 8 up, 8 in
2026-03-10T14:51:35.922
INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/mon.c/config 2026-03-10T14:51:36.191 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T14:51:36.200 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:36 vm00 bash[28403]: cluster 2026-03-10T14:51:34.733404+0000 mgr.y (mgr.14152) 230 : cluster [DBG] pgmap v215: 1 pgs: 1 remapped+peering; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:51:36.200 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:36 vm00 bash[28403]: cluster 2026-03-10T14:51:34.733404+0000 mgr.y (mgr.14152) 230 : cluster [DBG] pgmap v215: 1 pgs: 1 remapped+peering; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:51:36.200 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:36 vm00 bash[20726]: cluster 2026-03-10T14:51:34.733404+0000 mgr.y (mgr.14152) 230 : cluster [DBG] pgmap v215: 1 pgs: 1 remapped+peering; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:51:36.200 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:36 vm00 bash[20726]: cluster 2026-03-10T14:51:34.733404+0000 mgr.y (mgr.14152) 230 : cluster [DBG] pgmap v215: 1 pgs: 1 remapped+peering; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:51:36.245 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":56,"num_osds":8,"num_up_osds":8,"osd_up_since":1773154291,"num_in_osds":8,"osd_in_since":1773154272,"num_remapped_pgs":0} 2026-03-10T14:51:36.245 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 -- ceph osd dump --format=json 2026-03-10T14:51:36.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:36 vm03 bash[23394]: cluster 2026-03-10T14:51:34.733404+0000 mgr.y (mgr.14152) 230 : cluster [DBG] pgmap v215: 1 pgs: 1 remapped+peering; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:51:36.375 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:36 vm03 bash[23394]: cluster 2026-03-10T14:51:34.733404+0000 mgr.y (mgr.14152) 230 : cluster [DBG] pgmap v215: 1 pgs: 1 remapped+peering; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:51:37.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:37 vm00 bash[20726]: audit 2026-03-10T14:51:36.192351+0000 mon.a (mon.0) 635 : audit [DBG] from='client.? 192.168.123.100:0/1343530052' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T14:51:37.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:37 vm00 bash[20726]: audit 2026-03-10T14:51:36.192351+0000 mon.a (mon.0) 635 : audit [DBG] from='client.? 192.168.123.100:0/1343530052' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T14:51:37.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:37 vm00 bash[20726]: audit 2026-03-10T14:51:36.872184+0000 mon.a (mon.0) 636 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:51:37.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:37 vm00 bash[20726]: audit 2026-03-10T14:51:36.872184+0000 mon.a (mon.0) 636 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:51:37.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:37 vm00 bash[20726]: audit 2026-03-10T14:51:36.879335+0000 mon.a (mon.0) 637 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:51:37.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:37 vm00 bash[20726]: audit 2026-03-10T14:51:36.879335+0000 mon.a (mon.0) 637 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:51:37.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:37 vm00 bash[20726]: audit 2026-03-10T14:51:36.881612+0000 mon.a (mon.0) 638 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config rm", "who": 
"osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-10T14:51:37.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:37 vm00 bash[20726]: audit 2026-03-10T14:51:36.881612+0000 mon.a (mon.0) 638 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-10T14:51:37.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:37 vm00 bash[20726]: audit 2026-03-10T14:51:36.882220+0000 mon.a (mon.0) 639 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-10T14:51:37.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:37 vm00 bash[20726]: audit 2026-03-10T14:51:36.882220+0000 mon.a (mon.0) 639 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-10T14:51:37.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:37 vm00 bash[20726]: audit 2026-03-10T14:51:36.882972+0000 mon.a (mon.0) 640 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-10T14:51:37.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:37 vm00 bash[20726]: audit 2026-03-10T14:51:36.882972+0000 mon.a (mon.0) 640 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-10T14:51:37.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:37 vm00 bash[20726]: audit 2026-03-10T14:51:36.883512+0000 mon.a (mon.0) 641 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-10T14:51:37.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 
14:51:37 vm00 bash[20726]: audit 2026-03-10T14:51:36.883512+0000 mon.a (mon.0) 641 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-10T14:51:37.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:37 vm00 bash[20726]: audit 2026-03-10T14:51:36.884910+0000 mon.a (mon.0) 642 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:51:37.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:37 vm00 bash[20726]: audit 2026-03-10T14:51:36.884910+0000 mon.a (mon.0) 642 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:51:37.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:37 vm00 bash[20726]: audit 2026-03-10T14:51:36.885530+0000 mon.a (mon.0) 643 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:51:37.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:37 vm00 bash[20726]: audit 2026-03-10T14:51:36.885530+0000 mon.a (mon.0) 643 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:51:37.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:37 vm00 bash[20726]: audit 2026-03-10T14:51:36.889793+0000 mon.a (mon.0) 644 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:51:37.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:37 vm00 bash[20726]: audit 2026-03-10T14:51:36.889793+0000 mon.a (mon.0) 644 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:51:37.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:37 vm00 bash[28403]: audit 2026-03-10T14:51:36.192351+0000 mon.a 
(mon.0) 635 : audit [DBG] from='client.? 192.168.123.100:0/1343530052' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T14:51:37.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:37 vm00 bash[28403]: audit 2026-03-10T14:51:36.192351+0000 mon.a (mon.0) 635 : audit [DBG] from='client.? 192.168.123.100:0/1343530052' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T14:51:37.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:37 vm00 bash[28403]: audit 2026-03-10T14:51:36.872184+0000 mon.a (mon.0) 636 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:51:37.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:37 vm00 bash[28403]: audit 2026-03-10T14:51:36.872184+0000 mon.a (mon.0) 636 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:51:37.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:37 vm00 bash[28403]: audit 2026-03-10T14:51:36.879335+0000 mon.a (mon.0) 637 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:51:37.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:37 vm00 bash[28403]: audit 2026-03-10T14:51:36.879335+0000 mon.a (mon.0) 637 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:51:37.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:37 vm00 bash[28403]: audit 2026-03-10T14:51:36.881612+0000 mon.a (mon.0) 638 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-10T14:51:37.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:37 vm00 bash[28403]: audit 2026-03-10T14:51:36.881612+0000 mon.a (mon.0) 638 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 
2026-03-10T14:51:37.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:37 vm00 bash[28403]: audit 2026-03-10T14:51:36.882220+0000 mon.a (mon.0) 639 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-10T14:51:37.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:37 vm00 bash[28403]: audit 2026-03-10T14:51:36.882220+0000 mon.a (mon.0) 639 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-10T14:51:37.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:37 vm00 bash[28403]: audit 2026-03-10T14:51:36.882972+0000 mon.a (mon.0) 640 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-10T14:51:37.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:37 vm00 bash[28403]: audit 2026-03-10T14:51:36.882972+0000 mon.a (mon.0) 640 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-10T14:51:37.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:37 vm00 bash[28403]: audit 2026-03-10T14:51:36.883512+0000 mon.a (mon.0) 641 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-10T14:51:37.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:37 vm00 bash[28403]: audit 2026-03-10T14:51:36.883512+0000 mon.a (mon.0) 641 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-10T14:51:37.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:37 vm00 bash[28403]: audit 
2026-03-10T14:51:36.884910+0000 mon.a (mon.0) 642 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:51:37.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:37 vm00 bash[28403]: audit 2026-03-10T14:51:36.884910+0000 mon.a (mon.0) 642 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:51:37.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:37 vm00 bash[28403]: audit 2026-03-10T14:51:36.885530+0000 mon.a (mon.0) 643 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:51:37.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:37 vm00 bash[28403]: audit 2026-03-10T14:51:36.885530+0000 mon.a (mon.0) 643 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:51:37.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:37 vm00 bash[28403]: audit 2026-03-10T14:51:36.889793+0000 mon.a (mon.0) 644 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:51:37.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:37 vm00 bash[28403]: audit 2026-03-10T14:51:36.889793+0000 mon.a (mon.0) 644 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:51:37.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:37 vm03 bash[23394]: audit 2026-03-10T14:51:36.192351+0000 mon.a (mon.0) 635 : audit [DBG] from='client.? 192.168.123.100:0/1343530052' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T14:51:37.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:37 vm03 bash[23394]: audit 2026-03-10T14:51:36.192351+0000 mon.a (mon.0) 635 : audit [DBG] from='client.? 
192.168.123.100:0/1343530052' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T14:51:37.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:37 vm03 bash[23394]: audit 2026-03-10T14:51:36.872184+0000 mon.a (mon.0) 636 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:51:37.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:37 vm03 bash[23394]: audit 2026-03-10T14:51:36.872184+0000 mon.a (mon.0) 636 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:51:37.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:37 vm03 bash[23394]: audit 2026-03-10T14:51:36.879335+0000 mon.a (mon.0) 637 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:51:37.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:37 vm03 bash[23394]: audit 2026-03-10T14:51:36.879335+0000 mon.a (mon.0) 637 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:51:37.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:37 vm03 bash[23394]: audit 2026-03-10T14:51:36.881612+0000 mon.a (mon.0) 638 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-10T14:51:37.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:37 vm03 bash[23394]: audit 2026-03-10T14:51:36.881612+0000 mon.a (mon.0) 638 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-10T14:51:37.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:37 vm03 bash[23394]: audit 2026-03-10T14:51:36.882220+0000 mon.a (mon.0) 639 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-10T14:51:37.625 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:37 vm03 bash[23394]: audit 2026-03-10T14:51:36.882220+0000 mon.a (mon.0) 639 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-10T14:51:37.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:37 vm03 bash[23394]: audit 2026-03-10T14:51:36.882972+0000 mon.a (mon.0) 640 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-10T14:51:37.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:37 vm03 bash[23394]: audit 2026-03-10T14:51:36.882972+0000 mon.a (mon.0) 640 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-10T14:51:37.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:37 vm03 bash[23394]: audit 2026-03-10T14:51:36.883512+0000 mon.a (mon.0) 641 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-10T14:51:37.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:37 vm03 bash[23394]: audit 2026-03-10T14:51:36.883512+0000 mon.a (mon.0) 641 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-10T14:51:37.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:37 vm03 bash[23394]: audit 2026-03-10T14:51:36.884910+0000 mon.a (mon.0) 642 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:51:37.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:37 vm03 bash[23394]: audit 2026-03-10T14:51:36.884910+0000 mon.a (mon.0) 642 : audit [DBG] 
from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:51:37.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:37 vm03 bash[23394]: audit 2026-03-10T14:51:36.885530+0000 mon.a (mon.0) 643 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:51:37.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:37 vm03 bash[23394]: audit 2026-03-10T14:51:36.885530+0000 mon.a (mon.0) 643 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:51:37.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:37 vm03 bash[23394]: audit 2026-03-10T14:51:36.889793+0000 mon.a (mon.0) 644 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:51:37.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:37 vm03 bash[23394]: audit 2026-03-10T14:51:36.889793+0000 mon.a (mon.0) 644 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:51:38.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:38 vm03 bash[23394]: cluster 2026-03-10T14:51:36.733750+0000 mgr.y (mgr.14152) 231 : cluster [DBG] pgmap v216: 1 pgs: 1 remapped+peering; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:51:38.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:38 vm03 bash[23394]: cluster 2026-03-10T14:51:36.733750+0000 mgr.y (mgr.14152) 231 : cluster [DBG] pgmap v216: 1 pgs: 1 remapped+peering; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:51:38.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:38 vm03 bash[23394]: cephadm 2026-03-10T14:51:36.865262+0000 mgr.y (mgr.14152) 232 : cephadm [INF] Detected new or changed devices on vm03 2026-03-10T14:51:38.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:38 vm03 
bash[23394]: cephadm 2026-03-10T14:51:36.865262+0000 mgr.y (mgr.14152) 232 : cephadm [INF] Detected new or changed devices on vm03 2026-03-10T14:51:38.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:38 vm03 bash[23394]: cephadm 2026-03-10T14:51:36.883916+0000 mgr.y (mgr.14152) 233 : cephadm [INF] Adjusting osd_memory_target on vm03 to 113.9M 2026-03-10T14:51:38.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:38 vm03 bash[23394]: cephadm 2026-03-10T14:51:36.883916+0000 mgr.y (mgr.14152) 233 : cephadm [INF] Adjusting osd_memory_target on vm03 to 113.9M 2026-03-10T14:51:38.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:38 vm03 bash[23394]: cephadm 2026-03-10T14:51:36.884431+0000 mgr.y (mgr.14152) 234 : cephadm [WRN] Unable to set osd_memory_target on vm03 to 119479705: error parsing value: Value '119479705' is below minimum 939524096 2026-03-10T14:51:38.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:38 vm03 bash[23394]: cephadm 2026-03-10T14:51:36.884431+0000 mgr.y (mgr.14152) 234 : cephadm [WRN] Unable to set osd_memory_target on vm03 to 119479705: error parsing value: Value '119479705' is below minimum 939524096 2026-03-10T14:51:38.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:38 vm00 bash[20726]: cluster 2026-03-10T14:51:36.733750+0000 mgr.y (mgr.14152) 231 : cluster [DBG] pgmap v216: 1 pgs: 1 remapped+peering; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:51:38.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:38 vm00 bash[20726]: cluster 2026-03-10T14:51:36.733750+0000 mgr.y (mgr.14152) 231 : cluster [DBG] pgmap v216: 1 pgs: 1 remapped+peering; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:51:38.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:38 vm00 bash[20726]: cephadm 2026-03-10T14:51:36.865262+0000 mgr.y (mgr.14152) 232 : cephadm [INF] Detected new or changed devices on vm03 2026-03-10T14:51:38.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:38 vm00 
bash[20726]: cephadm 2026-03-10T14:51:36.865262+0000 mgr.y (mgr.14152) 232 : cephadm [INF] Detected new or changed devices on vm03 2026-03-10T14:51:38.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:38 vm00 bash[20726]: cephadm 2026-03-10T14:51:36.883916+0000 mgr.y (mgr.14152) 233 : cephadm [INF] Adjusting osd_memory_target on vm03 to 113.9M 2026-03-10T14:51:38.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:38 vm00 bash[20726]: cephadm 2026-03-10T14:51:36.883916+0000 mgr.y (mgr.14152) 233 : cephadm [INF] Adjusting osd_memory_target on vm03 to 113.9M 2026-03-10T14:51:38.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:38 vm00 bash[20726]: cephadm 2026-03-10T14:51:36.884431+0000 mgr.y (mgr.14152) 234 : cephadm [WRN] Unable to set osd_memory_target on vm03 to 119479705: error parsing value: Value '119479705' is below minimum 939524096 2026-03-10T14:51:38.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:38 vm00 bash[20726]: cephadm 2026-03-10T14:51:36.884431+0000 mgr.y (mgr.14152) 234 : cephadm [WRN] Unable to set osd_memory_target on vm03 to 119479705: error parsing value: Value '119479705' is below minimum 939524096 2026-03-10T14:51:38.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:38 vm00 bash[28403]: cluster 2026-03-10T14:51:36.733750+0000 mgr.y (mgr.14152) 231 : cluster [DBG] pgmap v216: 1 pgs: 1 remapped+peering; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:51:38.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:38 vm00 bash[28403]: cluster 2026-03-10T14:51:36.733750+0000 mgr.y (mgr.14152) 231 : cluster [DBG] pgmap v216: 1 pgs: 1 remapped+peering; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:51:38.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:38 vm00 bash[28403]: cephadm 2026-03-10T14:51:36.865262+0000 mgr.y (mgr.14152) 232 : cephadm [INF] Detected new or changed devices on vm03 2026-03-10T14:51:38.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:38 vm00 
bash[28403]: cephadm 2026-03-10T14:51:36.865262+0000 mgr.y (mgr.14152) 232 : cephadm [INF] Detected new or changed devices on vm03 2026-03-10T14:51:38.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:38 vm00 bash[28403]: cephadm 2026-03-10T14:51:36.883916+0000 mgr.y (mgr.14152) 233 : cephadm [INF] Adjusting osd_memory_target on vm03 to 113.9M 2026-03-10T14:51:38.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:38 vm00 bash[28403]: cephadm 2026-03-10T14:51:36.883916+0000 mgr.y (mgr.14152) 233 : cephadm [INF] Adjusting osd_memory_target on vm03 to 113.9M 2026-03-10T14:51:38.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:38 vm00 bash[28403]: cephadm 2026-03-10T14:51:36.884431+0000 mgr.y (mgr.14152) 234 : cephadm [WRN] Unable to set osd_memory_target on vm03 to 119479705: error parsing value: Value '119479705' is below minimum 939524096 2026-03-10T14:51:38.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:38 vm00 bash[28403]: cephadm 2026-03-10T14:51:36.884431+0000 mgr.y (mgr.14152) 234 : cephadm [WRN] Unable to set osd_memory_target on vm03 to 119479705: error parsing value: Value '119479705' is below minimum 939524096 2026-03-10T14:51:39.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:39 vm03 bash[23394]: cluster 2026-03-10T14:51:38.734029+0000 mgr.y (mgr.14152) 235 : cluster [DBG] pgmap v217: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:51:39.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:39 vm03 bash[23394]: cluster 2026-03-10T14:51:38.734029+0000 mgr.y (mgr.14152) 235 : cluster [DBG] pgmap v217: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:51:39.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:39 vm00 bash[20726]: cluster 2026-03-10T14:51:38.734029+0000 mgr.y (mgr.14152) 235 : cluster [DBG] pgmap v217: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:51:39.716 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:39 vm00 bash[20726]: cluster 2026-03-10T14:51:38.734029+0000 mgr.y (mgr.14152) 235 : cluster [DBG] pgmap v217: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:51:39.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:39 vm00 bash[28403]: cluster 2026-03-10T14:51:38.734029+0000 mgr.y (mgr.14152) 235 : cluster [DBG] pgmap v217: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:51:39.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:39 vm00 bash[28403]: cluster 2026-03-10T14:51:38.734029+0000 mgr.y (mgr.14152) 235 : cluster [DBG] pgmap v217: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:51:39.946 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/mon.c/config 2026-03-10T14:51:40.222 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T14:51:40.222 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":56,"fsid":"93bd26bc-1c8f-11f1-8404-610ce866bde7","created":"2026-03-10T14:45:32.261671+0000","modified":"2026-03-10T14:51:33.110635+0000","last_up_change":"2026-03-10T14:51:31.082469+0000","last_in_change":"2026-03-10T14:51:12.988931+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":18,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-10T14:48:38.744817+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peerin
g_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"22","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":7.8899998664855957,"score_stable":7.8899998664855957,"optimal_score":0.37999999523162842,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"c1ba9a14-6c50-4bf4-bfa2-935d1c099357","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":54,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6801","nonce":1492812989}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6802","nonce":1492812989}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6804","nonce":1492812989}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6803","nonce":1492812989}]},"public_addr":"192.168.123.100:6801/1492812989","cluster_addr":"192.168.123.100:6802/1492812989","heartbeat_back_addr":"192.168.123.100:6804/1492812989","heartbeat_front_addr":"192.168.123.100:6803/1492812989","state":["exists","up"]},{"osd":1,"uuid":"d926117c-9bf7-44cb-8796-78132bdc13d6","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":33,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6805","nonce":198852601}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6806","nonce":198852601}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6808","nonce":198852601}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6807","nonce":198852601}]},"public_addr":"192.168.123.100:6805/198852601","cluster_addr":"192.168.123.100:6806/198852601","heartbeat_back_addr":"192.168.123.100:6808/198852601","heartbeat_front_addr":"192.168.123.100:6807/198852601","state":["exists","up"]},{"osd":2,"uuid":"2ef814fa-4e2d-4d38-94de-a33c6dc06fe1","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begi
n":0,"last_clean_end":0,"up_from":18,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6809","nonce":4087124508}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6810","nonce":4087124508}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6812","nonce":4087124508}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6811","nonce":4087124508}]},"public_addr":"192.168.123.100:6809/4087124508","cluster_addr":"192.168.123.100:6810/4087124508","heartbeat_back_addr":"192.168.123.100:6812/4087124508","heartbeat_front_addr":"192.168.123.100:6811/4087124508","state":["exists","up"]},{"osd":3,"uuid":"536f0633-b026-45b8-8c47-eb23cccf9b64","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":26,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6813","nonce":1912373457}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6814","nonce":1912373457}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6816","nonce":1912373457}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6815","nonce":1912373457}]},"public_addr":"192.168.123.100:6813/1912373457","cluster_addr":"192.168.123.100:6814/1912373457","heartbeat_back_addr":"192.168.123.100:6816/1912373457","heartbeat_front_addr":"192.168.123.100:6815/1912373457","state":["exists","up"]},{"osd":4,"uuid":"d4924339-f850-475e-9859-ad7c6a3d2123","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":32,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6800","nonce":4249951776}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6801","nonce":4249951776}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6803","nonce":4249951776}]},"heartbeat_front_add
rs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6802","nonce":4249951776}]},"public_addr":"192.168.123.103:6800/4249951776","cluster_addr":"192.168.123.103:6801/4249951776","heartbeat_back_addr":"192.168.123.103:6803/4249951776","heartbeat_front_addr":"192.168.123.103:6802/4249951776","state":["exists","up"]},{"osd":5,"uuid":"bb51bca8-ec91-4c05-94f6-3755aef22a35","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":39,"up_thru":40,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6804","nonce":413751251}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6805","nonce":413751251}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6807","nonce":413751251}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6806","nonce":413751251}]},"public_addr":"192.168.123.103:6804/413751251","cluster_addr":"192.168.123.103:6805/413751251","heartbeat_back_addr":"192.168.123.103:6807/413751251","heartbeat_front_addr":"192.168.123.103:6806/413751251","state":["exists","up"]},{"osd":6,"uuid":"d5d7abd1-1279-4f32-bce7-89f79446b2d1","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":47,"up_thru":48,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6808","nonce":2099210513}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6809","nonce":2099210513}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6811","nonce":2099210513}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6810","nonce":2099210513}]},"public_addr":"192.168.123.103:6808/2099210513","cluster_addr":"192.168.123.103:6809/2099210513","heartbeat_back_addr":"192.168.123.103:6811/2099210513","heartbeat_front_addr":"192.168.123.103:6810/2099210513","state":["exists","up"]},{"osd":7,"uuid":"d982354a-c92b-452c-a8e1-997104ffd93b","up":1,"in":1
,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":54,"up_thru":55,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6812","nonce":1578983727}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6813","nonce":1578983727}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6815","nonce":1578983727}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6814","nonce":1578983727}]},"public_addr":"192.168.123.103:6812/1578983727","cluster_addr":"192.168.123.103:6813/1578983727","heartbeat_back_addr":"192.168.123.103:6815/1578983727","heartbeat_front_addr":"192.168.123.103:6814/1578983727","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T14:47:27.007815+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T14:47:57.703056+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T14:48:34.391610+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T14:49:09.732879+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T14:49:44.517228+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T14:50:19.132079+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_inte
rval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T14:50:52.571517+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T14:51:28.759784+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.100:0/3861326831":"2026-03-11T14:45:56.679254+0000","192.168.123.100:6800/1843604239":"2026-03-11T14:45:56.679254+0000","192.168.123.100:0/3470354970":"2026-03-11T14:45:56.679254+0000","192.168.123.100:6800/2111612896":"2026-03-11T14:45:45.658909+0000","192.168.123.100:0/3348435210":"2026-03-11T14:45:56.679254+0000","192.168.123.100:0/1278955968":"2026-03-11T14:45:45.658909+0000","192.168.123.100:0/2823338908":"2026-03-11T14:45:45.658909+0000","192.168.123.100:0/1588560963":"2026-03-11T14:45:45.658909+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-10T14:51:40.277 INFO:tasks.cephadm.ceph_manager.ceph:[{'pool': 1, 'pool_name': '.mgr', 'create_time': '2026-03-10T14:48:38.744817+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'is_stretch_pool': False, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 1, 'pg_placement_num': 1, 'pg_placement_num_target': 1, 'pg_num_target': 1, 'pg_num_pending': 1, 'last_pg_merge_meta': {'source_pgid': '0.0', 
'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '22', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_num_max': 32, 'pg_num_min': 1}, 'application_metadata': {'mgr': {}}, 'read_balance': {'score_type': 'Fair distribution', 'score_acting': 7.889999866485596, 'score_stable': 7.889999866485596, 'optimal_score': 0.3799999952316284, 'raw_score_acting': 3, 'raw_score_stable': 3, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}] 2026-03-10T14:51:40.278 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 -- ceph osd pool get .mgr pg_num 2026-03-10T14:51:40.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:40 vm03 bash[23394]: audit 2026-03-10T14:51:40.222731+0000 mon.c (mon.2) 12 : audit [DBG] from='client.? 
192.168.123.100:0/3660996050' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T14:51:40.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:40 vm03 bash[23394]: audit 2026-03-10T14:51:40.222731+0000 mon.c (mon.2) 12 : audit [DBG] from='client.? 192.168.123.100:0/3660996050' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T14:51:40.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:40 vm00 bash[28403]: audit 2026-03-10T14:51:40.222731+0000 mon.c (mon.2) 12 : audit [DBG] from='client.? 192.168.123.100:0/3660996050' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T14:51:40.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:40 vm00 bash[28403]: audit 2026-03-10T14:51:40.222731+0000 mon.c (mon.2) 12 : audit [DBG] from='client.? 192.168.123.100:0/3660996050' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T14:51:40.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:40 vm00 bash[20726]: audit 2026-03-10T14:51:40.222731+0000 mon.c (mon.2) 12 : audit [DBG] from='client.? 192.168.123.100:0/3660996050' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T14:51:40.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:40 vm00 bash[20726]: audit 2026-03-10T14:51:40.222731+0000 mon.c (mon.2) 12 : audit [DBG] from='client.? 
192.168.123.100:0/3660996050' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T14:51:41.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:41 vm03 bash[23394]: cluster 2026-03-10T14:51:40.734278+0000 mgr.y (mgr.14152) 236 : cluster [DBG] pgmap v218: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 52 KiB/s, 0 objects/s recovering 2026-03-10T14:51:41.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:41 vm03 bash[23394]: cluster 2026-03-10T14:51:40.734278+0000 mgr.y (mgr.14152) 236 : cluster [DBG] pgmap v218: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 52 KiB/s, 0 objects/s recovering 2026-03-10T14:51:41.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:41 vm00 bash[28403]: cluster 2026-03-10T14:51:40.734278+0000 mgr.y (mgr.14152) 236 : cluster [DBG] pgmap v218: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 52 KiB/s, 0 objects/s recovering 2026-03-10T14:51:41.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:41 vm00 bash[28403]: cluster 2026-03-10T14:51:40.734278+0000 mgr.y (mgr.14152) 236 : cluster [DBG] pgmap v218: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 52 KiB/s, 0 objects/s recovering 2026-03-10T14:51:41.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:41 vm00 bash[20726]: cluster 2026-03-10T14:51:40.734278+0000 mgr.y (mgr.14152) 236 : cluster [DBG] pgmap v218: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 52 KiB/s, 0 objects/s recovering 2026-03-10T14:51:41.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:41 vm00 bash[20726]: cluster 2026-03-10T14:51:40.734278+0000 mgr.y (mgr.14152) 236 : cluster [DBG] pgmap v218: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 52 KiB/s, 0 objects/s recovering 2026-03-10T14:51:43.967 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config 
/var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/mon.c/config 2026-03-10T14:51:44.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:43 vm03 bash[23394]: cluster 2026-03-10T14:51:42.734537+0000 mgr.y (mgr.14152) 237 : cluster [DBG] pgmap v219: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 45 KiB/s, 0 objects/s recovering 2026-03-10T14:51:44.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:43 vm03 bash[23394]: cluster 2026-03-10T14:51:42.734537+0000 mgr.y (mgr.14152) 237 : cluster [DBG] pgmap v219: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 45 KiB/s, 0 objects/s recovering 2026-03-10T14:51:44.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:43 vm00 bash[28403]: cluster 2026-03-10T14:51:42.734537+0000 mgr.y (mgr.14152) 237 : cluster [DBG] pgmap v219: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 45 KiB/s, 0 objects/s recovering 2026-03-10T14:51:44.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:43 vm00 bash[28403]: cluster 2026-03-10T14:51:42.734537+0000 mgr.y (mgr.14152) 237 : cluster [DBG] pgmap v219: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 45 KiB/s, 0 objects/s recovering 2026-03-10T14:51:44.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:43 vm00 bash[20726]: cluster 2026-03-10T14:51:42.734537+0000 mgr.y (mgr.14152) 237 : cluster [DBG] pgmap v219: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 45 KiB/s, 0 objects/s recovering 2026-03-10T14:51:44.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:43 vm00 bash[20726]: cluster 2026-03-10T14:51:42.734537+0000 mgr.y (mgr.14152) 237 : cluster [DBG] pgmap v219: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 45 KiB/s, 0 objects/s recovering 2026-03-10T14:51:44.220 INFO:teuthology.orchestra.run.vm00.stdout:pg_num: 1 2026-03-10T14:51:44.282 INFO:tasks.cephadm:Adding ceph.rgw.foo.a on vm00 
2026-03-10T14:51:44.282 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 -- ceph orch apply rgw foo.a --placement '1;vm00=foo.a' 2026-03-10T14:51:45.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:44 vm00 bash[28403]: audit 2026-03-10T14:51:44.221746+0000 mon.a (mon.0) 645 : audit [DBG] from='client.? 192.168.123.100:0/3600521626' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-10T14:51:45.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:44 vm00 bash[28403]: audit 2026-03-10T14:51:44.221746+0000 mon.a (mon.0) 645 : audit [DBG] from='client.? 192.168.123.100:0/3600521626' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-10T14:51:45.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:44 vm00 bash[20726]: audit 2026-03-10T14:51:44.221746+0000 mon.a (mon.0) 645 : audit [DBG] from='client.? 192.168.123.100:0/3600521626' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-10T14:51:45.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:44 vm00 bash[20726]: audit 2026-03-10T14:51:44.221746+0000 mon.a (mon.0) 645 : audit [DBG] from='client.? 192.168.123.100:0/3600521626' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-10T14:51:45.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:44 vm03 bash[23394]: audit 2026-03-10T14:51:44.221746+0000 mon.a (mon.0) 645 : audit [DBG] from='client.? 
192.168.123.100:0/3600521626' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-10T14:51:45.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:44 vm03 bash[23394]: audit 2026-03-10T14:51:44.221746+0000 mon.a (mon.0) 645 : audit [DBG] from='client.? 192.168.123.100:0/3600521626' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-10T14:51:46.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:45 vm00 bash[28403]: cluster 2026-03-10T14:51:44.734903+0000 mgr.y (mgr.14152) 238 : cluster [DBG] pgmap v220: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 39 KiB/s, 0 objects/s recovering 2026-03-10T14:51:46.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:45 vm00 bash[28403]: cluster 2026-03-10T14:51:44.734903+0000 mgr.y (mgr.14152) 238 : cluster [DBG] pgmap v220: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 39 KiB/s, 0 objects/s recovering 2026-03-10T14:51:46.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:45 vm00 bash[20726]: cluster 2026-03-10T14:51:44.734903+0000 mgr.y (mgr.14152) 238 : cluster [DBG] pgmap v220: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 39 KiB/s, 0 objects/s recovering 2026-03-10T14:51:46.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:45 vm00 bash[20726]: cluster 2026-03-10T14:51:44.734903+0000 mgr.y (mgr.14152) 238 : cluster [DBG] pgmap v220: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 39 KiB/s, 0 objects/s recovering 2026-03-10T14:51:46.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:45 vm03 bash[23394]: cluster 2026-03-10T14:51:44.734903+0000 mgr.y (mgr.14152) 238 : cluster [DBG] pgmap v220: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 39 KiB/s, 0 objects/s recovering 2026-03-10T14:51:46.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:45 
vm03 bash[23394]: cluster 2026-03-10T14:51:44.734903+0000 mgr.y (mgr.14152) 238 : cluster [DBG] pgmap v220: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 39 KiB/s, 0 objects/s recovering 2026-03-10T14:51:48.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:47 vm00 bash[28403]: cluster 2026-03-10T14:51:46.735221+0000 mgr.y (mgr.14152) 239 : cluster [DBG] pgmap v221: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-10T14:51:48.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:47 vm00 bash[28403]: cluster 2026-03-10T14:51:46.735221+0000 mgr.y (mgr.14152) 239 : cluster [DBG] pgmap v221: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-10T14:51:48.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:47 vm00 bash[20726]: cluster 2026-03-10T14:51:46.735221+0000 mgr.y (mgr.14152) 239 : cluster [DBG] pgmap v221: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-10T14:51:48.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:47 vm00 bash[20726]: cluster 2026-03-10T14:51:46.735221+0000 mgr.y (mgr.14152) 239 : cluster [DBG] pgmap v221: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-10T14:51:48.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:47 vm03 bash[23394]: cluster 2026-03-10T14:51:46.735221+0000 mgr.y (mgr.14152) 239 : cluster [DBG] pgmap v221: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-10T14:51:48.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:47 vm03 bash[23394]: cluster 2026-03-10T14:51:46.735221+0000 mgr.y (mgr.14152) 239 : cluster [DBG] pgmap v221: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 37 KiB/s, 0 objects/s recovering 
2026-03-10T14:51:48.916 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/mon.b/config 2026-03-10T14:51:49.207 INFO:teuthology.orchestra.run.vm03.stdout:Scheduled rgw.foo.a update... 2026-03-10T14:51:49.604 DEBUG:teuthology.orchestra.run.vm00:rgw.foo.a> sudo journalctl -f -n 0 -u ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@rgw.foo.a.service 2026-03-10T14:51:49.605 INFO:tasks.cephadm:Adding ceph.iscsi.iscsi.a on vm03 2026-03-10T14:51:49.606 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 -- ceph osd pool create datapool 3 3 replicated 2026-03-10T14:51:49.904 INFO:journalctl@ceph.rgw.foo.a.vm00.stdout:Mar 10 14:51:49 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-10T14:51:50.209 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:50 vm03 bash[23394]: cluster 2026-03-10T14:51:48.735539+0000 mgr.y (mgr.14152) 240 : cluster [DBG] pgmap v222: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-10T14:51:50.209 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:50 vm03 bash[23394]: cluster 2026-03-10T14:51:48.735539+0000 mgr.y (mgr.14152) 240 : cluster [DBG] pgmap v222: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-10T14:51:50.209 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:50 vm03 bash[23394]: audit 2026-03-10T14:51:49.201168+0000 mgr.y (mgr.14152) 241 : audit [DBG] from='client.24298 -' entity='client.admin' cmd=[{"prefix": "orch apply rgw", "svc_id": "foo.a", "placement": "1;vm00=foo.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:51:50.209 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:50 vm03 bash[23394]: audit 2026-03-10T14:51:49.201168+0000 mgr.y (mgr.14152) 241 : audit [DBG] from='client.24298 -' entity='client.admin' cmd=[{"prefix": "orch apply rgw", "svc_id": "foo.a", "placement": "1;vm00=foo.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:51:50.209 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:50 vm03 bash[23394]: cephadm 2026-03-10T14:51:49.202205+0000 mgr.y (mgr.14152) 242 : cephadm [INF] Saving service rgw.foo.a spec with placement vm00=foo.a;count:1 2026-03-10T14:51:50.209 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:50 vm03 bash[23394]: cephadm 2026-03-10T14:51:49.202205+0000 mgr.y (mgr.14152) 242 : cephadm [INF] Saving service rgw.foo.a spec with placement vm00=foo.a;count:1 2026-03-10T14:51:50.209 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:50 vm03 bash[23394]: audit 2026-03-10T14:51:49.207294+0000 mon.a (mon.0) 646 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:51:50.210 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:50 vm03 bash[23394]: audit 2026-03-10T14:51:49.207294+0000 mon.a (mon.0) 646 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:51:50.210 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:50 vm03 bash[23394]: audit 2026-03-10T14:51:49.208479+0000 mon.a (mon.0) 647 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:51:50.210 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:50 vm03 bash[23394]: audit 2026-03-10T14:51:49.208479+0000 mon.a (mon.0) 647 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:51:50.210 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:50 vm03 bash[23394]: audit 2026-03-10T14:51:49.525764+0000 mon.a (mon.0) 648 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:51:50.210 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:50 vm03 bash[23394]: audit 2026-03-10T14:51:49.525764+0000 mon.a (mon.0) 648 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:51:50.210 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:50 vm03 bash[23394]: audit 2026-03-10T14:51:49.526470+0000 mon.a (mon.0) 649 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:51:50.210 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:50 vm03 bash[23394]: audit 2026-03-10T14:51:49.526470+0000 mon.a (mon.0) 649 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:51:50.210 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:50 vm03 bash[23394]: audit 2026-03-10T14:51:49.588908+0000 mon.a (mon.0) 650 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:51:50.210 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:50 vm03 bash[23394]: audit 2026-03-10T14:51:49.591989+0000 mon.a (mon.0) 651 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-10T14:51:50.210 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:50 vm03 bash[23394]: audit 2026-03-10T14:51:49.600547+0000 mon.a (mon.0) 652 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
2026-03-10T14:51:50.210 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:50 vm03 bash[23394]: audit 2026-03-10T14:51:49.609780+0000 mon.a (mon.0) 653 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:51:50.210 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:50 vm03 bash[23394]: audit 2026-03-10T14:51:49.616980+0000 mon.a (mon.0) 654 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:51:50.422 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:50 vm00 bash[20726]: cluster 2026-03-10T14:51:48.735539+0000 mgr.y (mgr.14152) 240 : cluster [DBG] pgmap v222: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 37 KiB/s, 0 objects/s recovering
2026-03-10T14:51:50.422 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:50 vm00 bash[20726]: audit 2026-03-10T14:51:49.201168+0000 mgr.y (mgr.14152) 241 : audit [DBG] from='client.24298 -' entity='client.admin' cmd=[{"prefix": "orch apply rgw", "svc_id": "foo.a", "placement": "1;vm00=foo.a", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T14:51:50.422 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:50 vm00 bash[20726]: cephadm 2026-03-10T14:51:49.202205+0000 mgr.y (mgr.14152) 242 : cephadm [INF] Saving service rgw.foo.a spec with placement vm00=foo.a;count:1
2026-03-10T14:51:50.422 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:50 vm00 bash[20726]: audit 2026-03-10T14:51:49.207294+0000 mon.a (mon.0) 646 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:51:50.422 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:50 vm00 bash[20726]: audit 2026-03-10T14:51:49.208479+0000 mon.a (mon.0) 647 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:51:50.422 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:50 vm00 bash[20726]: audit 2026-03-10T14:51:49.525764+0000 mon.a (mon.0) 648 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:51:50.422 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:50 vm00 bash[20726]: audit 2026-03-10T14:51:49.526470+0000 mon.a (mon.0) 649 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T14:51:50.422 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:50 vm00 bash[20726]: audit 2026-03-10T14:51:49.588908+0000 mon.a (mon.0) 650 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:51:50.422 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:50 vm00 bash[20726]: audit 2026-03-10T14:51:49.591989+0000 mon.a (mon.0) 651 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-10T14:51:50.422 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:50 vm00 bash[20726]: audit 2026-03-10T14:51:49.600547+0000 mon.a (mon.0) 652 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
2026-03-10T14:51:50.422 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:50 vm00 bash[20726]: audit 2026-03-10T14:51:49.609780+0000 mon.a (mon.0) 653 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:51:50.422 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:50 vm00 bash[20726]: audit 2026-03-10T14:51:49.616980+0000 mon.a (mon.0) 654 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:51:50.422 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:50 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:51:50.423 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:51:50 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:51:50.423 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:50 vm00 bash[28403]: cluster 2026-03-10T14:51:48.735539+0000 mgr.y (mgr.14152) 240 : cluster [DBG] pgmap v222: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 37 KiB/s, 0 objects/s recovering
2026-03-10T14:51:50.423 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:50 vm00 bash[28403]: audit 2026-03-10T14:51:49.201168+0000 mgr.y (mgr.14152) 241 : audit [DBG] from='client.24298 -' entity='client.admin' cmd=[{"prefix": "orch apply rgw", "svc_id": "foo.a", "placement": "1;vm00=foo.a", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T14:51:50.423 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:50 vm00 bash[28403]: cephadm 2026-03-10T14:51:49.202205+0000 mgr.y (mgr.14152) 242 : cephadm [INF] Saving service rgw.foo.a spec with placement vm00=foo.a;count:1
2026-03-10T14:51:50.423 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:50 vm00 bash[28403]: audit 2026-03-10T14:51:49.207294+0000 mon.a (mon.0) 646 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:51:50.423 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:50 vm00 bash[28403]: audit 2026-03-10T14:51:49.208479+0000 mon.a (mon.0) 647 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:51:50.423 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:50 vm00 bash[28403]: audit 2026-03-10T14:51:49.525764+0000 mon.a (mon.0) 648 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:51:50.423 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:50 vm00 bash[28403]: audit 2026-03-10T14:51:49.526470+0000 mon.a (mon.0) 649 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T14:51:50.423 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:50 vm00 bash[28403]: audit 2026-03-10T14:51:49.588908+0000 mon.a (mon.0) 650 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:51:50.423 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:50 vm00 bash[28403]: audit 2026-03-10T14:51:49.591989+0000 mon.a (mon.0) 651 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-10T14:51:50.423 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:50 vm00 bash[28403]: audit 2026-03-10T14:51:49.600547+0000 mon.a (mon.0) 652 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
2026-03-10T14:51:50.423 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:50 vm00 bash[28403]: audit 2026-03-10T14:51:49.609780+0000 mon.a (mon.0) 653 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:51:50.423 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:50 vm00 bash[28403]: audit 2026-03-10T14:51:49.616980+0000 mon.a (mon.0) 654 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:51:50.423 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:50 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:51:50.423 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 10 14:51:50 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:51:50.423 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 10 14:51:50 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:51:50.423 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 10 14:51:50 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:51:50.423 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 10 14:51:50 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:51:50.423 INFO:journalctl@ceph.rgw.foo.a.vm00.stdout:Mar 10 14:51:50 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:51:50.717 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 10 14:51:50 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:51:50.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:50 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:51:50.717 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:51:50 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:51:50.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:50 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:51:50.717 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 10 14:51:50 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:51:50.717 INFO:journalctl@ceph.rgw.foo.a.vm00.stdout:Mar 10 14:51:50 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:51:50.717 INFO:journalctl@ceph.rgw.foo.a.vm00.stdout:Mar 10 14:51:50 vm00 systemd[1]: Started Ceph rgw.foo.a for 93bd26bc-1c8f-11f1-8404-610ce866bde7.
2026-03-10T14:51:50.717 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 10 14:51:50 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:51:50.717 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 10 14:51:50 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:51:51.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:51 vm00 bash[28403]: cephadm 2026-03-10T14:51:49.617960+0000 mgr.y (mgr.14152) 243 : cephadm [INF] Deploying daemon rgw.foo.a on vm00
2026-03-10T14:51:51.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:51 vm00 bash[28403]: audit 2026-03-10T14:51:50.674920+0000 mon.a (mon.0) 655 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:51:51.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:51 vm00 bash[28403]: audit 2026-03-10T14:51:50.681554+0000 mon.a (mon.0) 656 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:51:51.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:51 vm00 bash[28403]: audit 2026-03-10T14:51:50.687926+0000 mon.a (mon.0) 657 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:51:51.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:51 vm00 bash[28403]: audit 2026-03-10T14:51:50.692386+0000 mon.a (mon.0) 658 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:51:51.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:51 vm00 bash[28403]: audit 2026-03-10T14:51:50.696414+0000 mon.a (mon.0) 659 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:51:51.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:51 vm00 bash[20726]: cephadm 2026-03-10T14:51:49.617960+0000 mgr.y (mgr.14152) 243 : cephadm [INF] Deploying daemon rgw.foo.a on vm00
2026-03-10T14:51:51.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:51 vm00 bash[20726]: audit 2026-03-10T14:51:50.674920+0000 mon.a (mon.0) 655 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:51:51.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:51 vm00 bash[20726]: audit 2026-03-10T14:51:50.681554+0000 mon.a (mon.0) 656 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:51:51.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:51 vm00 bash[20726]: audit 2026-03-10T14:51:50.687926+0000 mon.a (mon.0) 657 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:51:51.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:51 vm00 bash[20726]: audit 2026-03-10T14:51:50.692386+0000 mon.a (mon.0) 658 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:51:51.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:51 vm00 bash[20726]: audit 2026-03-10T14:51:50.696414+0000 mon.a (mon.0) 659 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:51:51.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:51 vm00 bash[20726]: audit 2026-03-10T14:51:50.706543+0000 mon.a (mon.0) 660 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:51:51.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:51 vm00 bash[28403]: audit 2026-03-10T14:51:50.706543+0000 mon.a (mon.0) 660 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:51:51.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:51 vm03 bash[23394]: cephadm 2026-03-10T14:51:49.617960+0000 mgr.y (mgr.14152) 243 : cephadm [INF] Deploying daemon rgw.foo.a on vm00
2026-03-10T14:51:51.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:51 vm03 bash[23394]: audit 2026-03-10T14:51:50.674920+0000 mon.a (mon.0) 655 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:51:51.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:51 vm03 bash[23394]: audit 2026-03-10T14:51:50.681554+0000 mon.a (mon.0) 656 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:51:51.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:51 vm03 bash[23394]: audit 2026-03-10T14:51:50.687926+0000 mon.a (mon.0) 657 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:51:51.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:51 vm03 bash[23394]: audit 2026-03-10T14:51:50.692386+0000 mon.a (mon.0) 658 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:51:51.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:51 vm03 bash[23394]: audit 2026-03-10T14:51:50.696414+0000 mon.a (mon.0) 659 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:51:51.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:51 vm03 bash[23394]: audit 2026-03-10T14:51:50.706543+0000 mon.a (mon.0) 660 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:51:52.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:52 vm03 bash[23394]: cephadm 2026-03-10T14:51:50.688508+0000 mgr.y (mgr.14152) 244 : cephadm [INF] Saving service rgw.foo.a spec with placement vm00=foo.a;count:1
2026-03-10T14:51:52.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:52 vm03 bash[23394]: cluster 2026-03-10T14:51:50.735786+0000 mgr.y (mgr.14152) 245 : cluster [DBG] pgmap v223: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail
2026-03-10T14:51:52.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:52 vm03 bash[23394]: cluster 2026-03-10T14:51:51.716587+0000 mon.a (mon.0) 661 : cluster [DBG] osdmap e57: 8 total, 8 up, 8 in
2026-03-10T14:51:52.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:52 vm03 bash[23394]: audit 2026-03-10T14:51:51.718248+0000 mon.b (mon.1) 26 : audit [INF] from='client.? 192.168.123.100:0/1435355823' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
2026-03-10T14:51:52.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:52 vm03 bash[23394]: audit 2026-03-10T14:51:51.723508+0000 mon.a (mon.0) 662 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
2026-03-10T14:51:52.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:52 vm00 bash[28403]: cephadm 2026-03-10T14:51:50.688508+0000 mgr.y (mgr.14152) 244 : cephadm [INF] Saving service rgw.foo.a spec with placement vm00=foo.a;count:1
2026-03-10T14:51:52.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:52 vm00 bash[28403]: cluster 2026-03-10T14:51:50.735786+0000 mgr.y (mgr.14152) 245 : cluster [DBG] pgmap v223: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail
2026-03-10T14:51:52.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:52 vm00 bash[28403]: cluster 2026-03-10T14:51:51.716587+0000 mon.a (mon.0) 661 : cluster [DBG] osdmap e57: 8 total, 8 up, 8 in
2026-03-10T14:51:52.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:52 vm00 bash[28403]: audit 2026-03-10T14:51:51.718248+0000 mon.b (mon.1) 26 : audit [INF] from='client.? 192.168.123.100:0/1435355823' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
2026-03-10T14:51:52.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:52 vm00 bash[28403]: audit 2026-03-10T14:51:51.723508+0000 mon.a (mon.0) 662 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
2026-03-10T14:51:52.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:52 vm00 bash[28403]: audit 2026-03-10T14:51:51.723508+0000 mon.a (mon.0) 662 : audit [INF] from='client.? 
' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-10T14:51:52.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:52 vm00 bash[20726]: cephadm 2026-03-10T14:51:50.688508+0000 mgr.y (mgr.14152) 244 : cephadm [INF] Saving service rgw.foo.a spec with placement vm00=foo.a;count:1 2026-03-10T14:51:52.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:52 vm00 bash[20726]: cephadm 2026-03-10T14:51:50.688508+0000 mgr.y (mgr.14152) 244 : cephadm [INF] Saving service rgw.foo.a spec with placement vm00=foo.a;count:1 2026-03-10T14:51:52.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:52 vm00 bash[20726]: cluster 2026-03-10T14:51:50.735786+0000 mgr.y (mgr.14152) 245 : cluster [DBG] pgmap v223: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:51:52.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:52 vm00 bash[20726]: cluster 2026-03-10T14:51:50.735786+0000 mgr.y (mgr.14152) 245 : cluster [DBG] pgmap v223: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:51:52.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:52 vm00 bash[20726]: cluster 2026-03-10T14:51:51.716587+0000 mon.a (mon.0) 661 : cluster [DBG] osdmap e57: 8 total, 8 up, 8 in 2026-03-10T14:51:52.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:52 vm00 bash[20726]: cluster 2026-03-10T14:51:51.716587+0000 mon.a (mon.0) 661 : cluster [DBG] osdmap e57: 8 total, 8 up, 8 in 2026-03-10T14:51:52.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:52 vm00 bash[20726]: audit 2026-03-10T14:51:51.718248+0000 mon.b (mon.1) 26 : audit [INF] from='client.? 
192.168.123.100:0/1435355823' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-10T14:51:52.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:52 vm00 bash[20726]: audit 2026-03-10T14:51:51.718248+0000 mon.b (mon.1) 26 : audit [INF] from='client.? 192.168.123.100:0/1435355823' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-10T14:51:52.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:52 vm00 bash[20726]: audit 2026-03-10T14:51:51.723508+0000 mon.a (mon.0) 662 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-10T14:51:52.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:52 vm00 bash[20726]: audit 2026-03-10T14:51:51.723508+0000 mon.a (mon.0) 662 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-10T14:51:54.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:53 vm00 bash[28403]: audit 2026-03-10T14:51:52.709048+0000 mon.a (mon.0) 663 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished 2026-03-10T14:51:54.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:53 vm00 bash[28403]: audit 2026-03-10T14:51:52.709048+0000 mon.a (mon.0) 663 : audit [INF] from='client.? 
' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished 2026-03-10T14:51:54.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:53 vm00 bash[28403]: cluster 2026-03-10T14:51:52.717919+0000 mon.a (mon.0) 664 : cluster [DBG] osdmap e58: 8 total, 8 up, 8 in 2026-03-10T14:51:54.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:53 vm00 bash[28403]: cluster 2026-03-10T14:51:52.717919+0000 mon.a (mon.0) 664 : cluster [DBG] osdmap e58: 8 total, 8 up, 8 in 2026-03-10T14:51:54.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:53 vm00 bash[28403]: cluster 2026-03-10T14:51:52.736037+0000 mgr.y (mgr.14152) 246 : cluster [DBG] pgmap v226: 33 pgs: 16 creating+peering, 1 active+clean, 16 unknown; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:51:54.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:53 vm00 bash[28403]: cluster 2026-03-10T14:51:52.736037+0000 mgr.y (mgr.14152) 246 : cluster [DBG] pgmap v226: 33 pgs: 16 creating+peering, 1 active+clean, 16 unknown; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:51:54.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:53 vm00 bash[20726]: audit 2026-03-10T14:51:52.709048+0000 mon.a (mon.0) 663 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished 2026-03-10T14:51:54.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:53 vm00 bash[20726]: audit 2026-03-10T14:51:52.709048+0000 mon.a (mon.0) 663 : audit [INF] from='client.? 
' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished 2026-03-10T14:51:54.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:53 vm00 bash[20726]: cluster 2026-03-10T14:51:52.717919+0000 mon.a (mon.0) 664 : cluster [DBG] osdmap e58: 8 total, 8 up, 8 in 2026-03-10T14:51:54.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:53 vm00 bash[20726]: cluster 2026-03-10T14:51:52.717919+0000 mon.a (mon.0) 664 : cluster [DBG] osdmap e58: 8 total, 8 up, 8 in 2026-03-10T14:51:54.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:53 vm00 bash[20726]: cluster 2026-03-10T14:51:52.736037+0000 mgr.y (mgr.14152) 246 : cluster [DBG] pgmap v226: 33 pgs: 16 creating+peering, 1 active+clean, 16 unknown; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:51:54.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:53 vm00 bash[20726]: cluster 2026-03-10T14:51:52.736037+0000 mgr.y (mgr.14152) 246 : cluster [DBG] pgmap v226: 33 pgs: 16 creating+peering, 1 active+clean, 16 unknown; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:51:54.245 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/mon.b/config 2026-03-10T14:51:54.283 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:53 vm03 bash[23394]: audit 2026-03-10T14:51:52.709048+0000 mon.a (mon.0) 663 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished 2026-03-10T14:51:54.283 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:53 vm03 bash[23394]: audit 2026-03-10T14:51:52.709048+0000 mon.a (mon.0) 663 : audit [INF] from='client.? 
' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished 2026-03-10T14:51:54.283 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:53 vm03 bash[23394]: cluster 2026-03-10T14:51:52.717919+0000 mon.a (mon.0) 664 : cluster [DBG] osdmap e58: 8 total, 8 up, 8 in 2026-03-10T14:51:54.283 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:53 vm03 bash[23394]: cluster 2026-03-10T14:51:52.717919+0000 mon.a (mon.0) 664 : cluster [DBG] osdmap e58: 8 total, 8 up, 8 in 2026-03-10T14:51:54.283 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:53 vm03 bash[23394]: cluster 2026-03-10T14:51:52.736037+0000 mgr.y (mgr.14152) 246 : cluster [DBG] pgmap v226: 33 pgs: 16 creating+peering, 1 active+clean, 16 unknown; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:51:54.283 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:53 vm03 bash[23394]: cluster 2026-03-10T14:51:52.736037+0000 mgr.y (mgr.14152) 246 : cluster [DBG] pgmap v226: 33 pgs: 16 creating+peering, 1 active+clean, 16 unknown; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:51:54.931 INFO:teuthology.orchestra.run.vm03.stderr:pool 'datapool' created 2026-03-10T14:51:55.013 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 -- rbd pool init datapool 2026-03-10T14:51:55.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:54 vm00 bash[28403]: cluster 2026-03-10T14:51:53.906150+0000 mon.a (mon.0) 665 : cluster [DBG] osdmap e59: 8 total, 8 up, 8 in 2026-03-10T14:51:55.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:54 vm00 bash[28403]: cluster 2026-03-10T14:51:53.906150+0000 mon.a (mon.0) 665 : cluster [DBG] osdmap e59: 8 total, 8 up, 8 in 2026-03-10T14:51:55.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 
14:51:54 vm00 bash[28403]: audit 2026-03-10T14:51:53.911133+0000 mon.c (mon.2) 13 : audit [INF] from='client.? 192.168.123.100:0/1863131937' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-10T14:51:55.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:54 vm00 bash[28403]: audit 2026-03-10T14:51:53.911133+0000 mon.c (mon.2) 13 : audit [INF] from='client.? 192.168.123.100:0/1863131937' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-10T14:51:55.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:54 vm00 bash[28403]: audit 2026-03-10T14:51:53.911991+0000 mon.a (mon.0) 666 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-10T14:51:55.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:54 vm00 bash[28403]: audit 2026-03-10T14:51:53.911991+0000 mon.a (mon.0) 666 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-10T14:51:55.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:54 vm00 bash[28403]: audit 2026-03-10T14:51:54.517161+0000 mon.c (mon.2) 14 : audit [INF] from='client.? 192.168.123.103:0/3605141734' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-10T14:51:55.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:54 vm00 bash[28403]: audit 2026-03-10T14:51:54.517161+0000 mon.c (mon.2) 14 : audit [INF] from='client.? 
192.168.123.103:0/3605141734' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-10T14:51:55.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:54 vm00 bash[28403]: audit 2026-03-10T14:51:54.517467+0000 mon.a (mon.0) 667 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-10T14:51:55.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:54 vm00 bash[28403]: audit 2026-03-10T14:51:54.517467+0000 mon.a (mon.0) 667 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-10T14:51:55.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:54 vm00 bash[28403]: audit 2026-03-10T14:51:54.902614+0000 mon.a (mon.0) 668 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished 2026-03-10T14:51:55.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:54 vm00 bash[28403]: audit 2026-03-10T14:51:54.902614+0000 mon.a (mon.0) 668 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished 2026-03-10T14:51:55.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:54 vm00 bash[28403]: audit 2026-03-10T14:51:54.902655+0000 mon.a (mon.0) 669 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]': finished 2026-03-10T14:51:55.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:54 vm00 bash[28403]: audit 2026-03-10T14:51:54.902655+0000 mon.a (mon.0) 669 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]': finished 2026-03-10T14:51:55.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:54 vm00 bash[28403]: cluster 2026-03-10T14:51:54.907799+0000 mon.a (mon.0) 670 : cluster [DBG] osdmap e60: 8 total, 8 up, 8 in 2026-03-10T14:51:55.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:54 vm00 bash[28403]: cluster 2026-03-10T14:51:54.907799+0000 mon.a (mon.0) 670 : cluster [DBG] osdmap e60: 8 total, 8 up, 8 in 2026-03-10T14:51:55.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:54 vm00 bash[20726]: cluster 2026-03-10T14:51:53.906150+0000 mon.a (mon.0) 665 : cluster [DBG] osdmap e59: 8 total, 8 up, 8 in 2026-03-10T14:51:55.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:54 vm00 bash[20726]: cluster 2026-03-10T14:51:53.906150+0000 mon.a (mon.0) 665 : cluster [DBG] osdmap e59: 8 total, 8 up, 8 in 2026-03-10T14:51:55.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:54 vm00 bash[20726]: audit 2026-03-10T14:51:53.911133+0000 mon.c (mon.2) 13 : audit [INF] from='client.? 192.168.123.100:0/1863131937' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-10T14:51:55.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:54 vm00 bash[20726]: audit 2026-03-10T14:51:53.911133+0000 mon.c (mon.2) 13 : audit [INF] from='client.? 192.168.123.100:0/1863131937' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-10T14:51:55.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:54 vm00 bash[20726]: audit 2026-03-10T14:51:53.911991+0000 mon.a (mon.0) 666 : audit [INF] from='client.? 
' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-10T14:51:55.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:54 vm00 bash[20726]: audit 2026-03-10T14:51:53.911991+0000 mon.a (mon.0) 666 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-10T14:51:55.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:54 vm00 bash[20726]: audit 2026-03-10T14:51:54.517161+0000 mon.c (mon.2) 14 : audit [INF] from='client.? 192.168.123.103:0/3605141734' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-10T14:51:55.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:54 vm00 bash[20726]: audit 2026-03-10T14:51:54.517161+0000 mon.c (mon.2) 14 : audit [INF] from='client.? 192.168.123.103:0/3605141734' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-10T14:51:55.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:54 vm00 bash[20726]: audit 2026-03-10T14:51:54.517467+0000 mon.a (mon.0) 667 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-10T14:51:55.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:54 vm00 bash[20726]: audit 2026-03-10T14:51:54.517467+0000 mon.a (mon.0) 667 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-10T14:51:55.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:54 vm00 bash[20726]: audit 2026-03-10T14:51:54.902614+0000 mon.a (mon.0) 668 : audit [INF] from='client.? 
' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished 2026-03-10T14:51:55.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:54 vm00 bash[20726]: audit 2026-03-10T14:51:54.902614+0000 mon.a (mon.0) 668 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished 2026-03-10T14:51:55.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:54 vm00 bash[20726]: audit 2026-03-10T14:51:54.902655+0000 mon.a (mon.0) 669 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]': finished 2026-03-10T14:51:55.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:54 vm00 bash[20726]: audit 2026-03-10T14:51:54.902655+0000 mon.a (mon.0) 669 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]': finished 2026-03-10T14:51:55.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:54 vm00 bash[20726]: cluster 2026-03-10T14:51:54.907799+0000 mon.a (mon.0) 670 : cluster [DBG] osdmap e60: 8 total, 8 up, 8 in 2026-03-10T14:51:55.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:54 vm00 bash[20726]: cluster 2026-03-10T14:51:54.907799+0000 mon.a (mon.0) 670 : cluster [DBG] osdmap e60: 8 total, 8 up, 8 in 2026-03-10T14:51:55.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:54 vm03 bash[23394]: cluster 2026-03-10T14:51:53.906150+0000 mon.a (mon.0) 665 : cluster [DBG] osdmap e59: 8 total, 8 up, 8 in 2026-03-10T14:51:55.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:54 vm03 bash[23394]: cluster 2026-03-10T14:51:53.906150+0000 mon.a (mon.0) 665 : cluster [DBG] osdmap e59: 8 total, 8 up, 8 in 2026-03-10T14:51:55.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:54 vm03 
bash[23394]: audit 2026-03-10T14:51:53.911133+0000 mon.c (mon.2) 13 : audit [INF] from='client.? 192.168.123.100:0/1863131937' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-10T14:51:55.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:54 vm03 bash[23394]: audit 2026-03-10T14:51:53.911133+0000 mon.c (mon.2) 13 : audit [INF] from='client.? 192.168.123.100:0/1863131937' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-10T14:51:55.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:54 vm03 bash[23394]: audit 2026-03-10T14:51:53.911991+0000 mon.a (mon.0) 666 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-10T14:51:55.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:54 vm03 bash[23394]: audit 2026-03-10T14:51:53.911991+0000 mon.a (mon.0) 666 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-10T14:51:55.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:54 vm03 bash[23394]: audit 2026-03-10T14:51:54.517161+0000 mon.c (mon.2) 14 : audit [INF] from='client.? 192.168.123.103:0/3605141734' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-10T14:51:55.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:54 vm03 bash[23394]: audit 2026-03-10T14:51:54.517161+0000 mon.c (mon.2) 14 : audit [INF] from='client.? 
192.168.123.103:0/3605141734' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-10T14:51:55.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:54 vm03 bash[23394]: audit 2026-03-10T14:51:54.517467+0000 mon.a (mon.0) 667 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-10T14:51:55.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:54 vm03 bash[23394]: audit 2026-03-10T14:51:54.517467+0000 mon.a (mon.0) 667 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-10T14:51:55.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:54 vm03 bash[23394]: audit 2026-03-10T14:51:54.902614+0000 mon.a (mon.0) 668 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished 2026-03-10T14:51:55.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:54 vm03 bash[23394]: audit 2026-03-10T14:51:54.902614+0000 mon.a (mon.0) 668 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished 2026-03-10T14:51:55.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:54 vm03 bash[23394]: audit 2026-03-10T14:51:54.902655+0000 mon.a (mon.0) 669 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]': finished 2026-03-10T14:51:55.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:54 vm03 bash[23394]: audit 2026-03-10T14:51:54.902655+0000 mon.a (mon.0) 669 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]': finished 2026-03-10T14:51:55.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:54 vm03 bash[23394]: cluster 2026-03-10T14:51:54.907799+0000 mon.a (mon.0) 670 : cluster [DBG] osdmap e60: 8 total, 8 up, 8 in 2026-03-10T14:51:55.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:54 vm03 bash[23394]: cluster 2026-03-10T14:51:54.907799+0000 mon.a (mon.0) 670 : cluster [DBG] osdmap e60: 8 total, 8 up, 8 in 2026-03-10T14:51:56.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:55 vm00 bash[28403]: cluster 2026-03-10T14:51:54.736366+0000 mgr.y (mgr.14152) 247 : cluster [DBG] pgmap v228: 65 pgs: 16 creating+peering, 10 active+clean, 39 unknown; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:51:56.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:55 vm00 bash[28403]: cluster 2026-03-10T14:51:54.736366+0000 mgr.y (mgr.14152) 247 : cluster [DBG] pgmap v228: 65 pgs: 16 creating+peering, 10 active+clean, 39 unknown; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:51:56.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:55 vm00 bash[28403]: cluster 2026-03-10T14:51:54.918834+0000 mon.a (mon.0) 671 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:51:56.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:55 vm00 bash[28403]: cluster 2026-03-10T14:51:54.918834+0000 mon.a (mon.0) 671 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:51:56.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:55 vm00 bash[28403]: audit 2026-03-10T14:51:55.627437+0000 mon.a (mon.0) 672 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:51:56.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:55 vm00 bash[28403]: audit 
2026-03-10T14:51:55.627437+0000 mon.a (mon.0) 672 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:51:56.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:55 vm00 bash[28403]: audit 2026-03-10T14:51:55.786644+0000 mon.a (mon.0) 673 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:51:56.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:55 vm00 bash[28403]: audit 2026-03-10T14:51:55.786644+0000 mon.a (mon.0) 673 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:51:56.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:55 vm00 bash[28403]: audit 2026-03-10T14:51:55.793466+0000 mon.a (mon.0) 674 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:51:56.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:55 vm00 bash[28403]: audit 2026-03-10T14:51:55.793466+0000 mon.a (mon.0) 674 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:51:56.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:55 vm00 bash[28403]: audit 2026-03-10T14:51:55.794391+0000 mon.a (mon.0) 675 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:51:56.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:55 vm00 bash[28403]: audit 2026-03-10T14:51:55.794391+0000 mon.a (mon.0) 675 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:51:56.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:55 vm00 bash[28403]: audit 2026-03-10T14:51:55.795109+0000 mon.a (mon.0) 676 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:51:56.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:55 vm00 
bash[28403]: audit 2026-03-10T14:51:55.795109+0000 mon.a (mon.0) 676 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:51:56.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:55 vm00 bash[28403]: cluster 2026-03-10T14:51:55.909086+0000 mon.a (mon.0) 677 : cluster [DBG] osdmap e61: 8 total, 8 up, 8 in 2026-03-10T14:51:56.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:55 vm00 bash[28403]: cluster 2026-03-10T14:51:55.909086+0000 mon.a (mon.0) 677 : cluster [DBG] osdmap e61: 8 total, 8 up, 8 in 2026-03-10T14:51:56.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:55 vm00 bash[28403]: audit 2026-03-10T14:51:55.924033+0000 mon.c (mon.2) 15 : audit [INF] from='client.? 192.168.123.100:0/1863131937' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-10T14:51:56.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:55 vm00 bash[28403]: audit 2026-03-10T14:51:55.924033+0000 mon.c (mon.2) 15 : audit [INF] from='client.? 192.168.123.100:0/1863131937' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-10T14:51:56.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:55 vm00 bash[28403]: audit 2026-03-10T14:51:55.924744+0000 mon.a (mon.0) 678 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-10T14:51:56.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:55 vm00 bash[28403]: audit 2026-03-10T14:51:55.924744+0000 mon.a (mon.0) 678 : audit [INF] from='client.? 
' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-10T14:51:56.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:55 vm00 bash[20726]: cluster 2026-03-10T14:51:54.736366+0000 mgr.y (mgr.14152) 247 : cluster [DBG] pgmap v228: 65 pgs: 16 creating+peering, 10 active+clean, 39 unknown; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:51:56.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:55 vm00 bash[20726]: cluster 2026-03-10T14:51:54.736366+0000 mgr.y (mgr.14152) 247 : cluster [DBG] pgmap v228: 65 pgs: 16 creating+peering, 10 active+clean, 39 unknown; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:51:56.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:55 vm00 bash[20726]: cluster 2026-03-10T14:51:54.918834+0000 mon.a (mon.0) 671 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:51:56.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:55 vm00 bash[20726]: cluster 2026-03-10T14:51:54.918834+0000 mon.a (mon.0) 671 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:51:56.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:55 vm00 bash[20726]: audit 2026-03-10T14:51:55.627437+0000 mon.a (mon.0) 672 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:51:56.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:55 vm00 bash[20726]: audit 2026-03-10T14:51:55.627437+0000 mon.a (mon.0) 672 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:51:56.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:55 vm00 bash[20726]: audit 2026-03-10T14:51:55.786644+0000 mon.a (mon.0) 673 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:51:56.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 
14:51:55 vm00 bash[20726]: audit 2026-03-10T14:51:55.786644+0000 mon.a (mon.0) 673 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:51:56.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:55 vm00 bash[20726]: audit 2026-03-10T14:51:55.793466+0000 mon.a (mon.0) 674 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:51:56.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:55 vm00 bash[20726]: audit 2026-03-10T14:51:55.793466+0000 mon.a (mon.0) 674 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:51:56.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:55 vm00 bash[20726]: audit 2026-03-10T14:51:55.794391+0000 mon.a (mon.0) 675 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:51:56.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:55 vm00 bash[20726]: audit 2026-03-10T14:51:55.794391+0000 mon.a (mon.0) 675 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:51:56.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:55 vm00 bash[20726]: audit 2026-03-10T14:51:55.795109+0000 mon.a (mon.0) 676 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:51:56.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:55 vm00 bash[20726]: audit 2026-03-10T14:51:55.795109+0000 mon.a (mon.0) 676 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:51:56.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:55 vm00 bash[20726]: cluster 2026-03-10T14:51:55.909086+0000 mon.a (mon.0) 677 : cluster [DBG] osdmap e61: 8 total, 8 up, 8 in 
2026-03-10T14:51:56.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:55 vm00 bash[20726]: cluster 2026-03-10T14:51:55.909086+0000 mon.a (mon.0) 677 : cluster [DBG] osdmap e61: 8 total, 8 up, 8 in 2026-03-10T14:51:56.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:55 vm00 bash[20726]: audit 2026-03-10T14:51:55.924033+0000 mon.c (mon.2) 15 : audit [INF] from='client.? 192.168.123.100:0/1863131937' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-10T14:51:56.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:55 vm00 bash[20726]: audit 2026-03-10T14:51:55.924033+0000 mon.c (mon.2) 15 : audit [INF] from='client.? 192.168.123.100:0/1863131937' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-10T14:51:56.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:55 vm00 bash[20726]: audit 2026-03-10T14:51:55.924744+0000 mon.a (mon.0) 678 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-10T14:51:56.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:55 vm00 bash[20726]: audit 2026-03-10T14:51:55.924744+0000 mon.a (mon.0) 678 : audit [INF] from='client.? 
' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-10T14:51:56.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:55 vm03 bash[23394]: cluster 2026-03-10T14:51:54.736366+0000 mgr.y (mgr.14152) 247 : cluster [DBG] pgmap v228: 65 pgs: 16 creating+peering, 10 active+clean, 39 unknown; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:51:56.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:55 vm03 bash[23394]: cluster 2026-03-10T14:51:54.736366+0000 mgr.y (mgr.14152) 247 : cluster [DBG] pgmap v228: 65 pgs: 16 creating+peering, 10 active+clean, 39 unknown; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:51:56.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:55 vm03 bash[23394]: cluster 2026-03-10T14:51:54.918834+0000 mon.a (mon.0) 671 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:51:56.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:55 vm03 bash[23394]: cluster 2026-03-10T14:51:54.918834+0000 mon.a (mon.0) 671 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:51:56.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:55 vm03 bash[23394]: audit 2026-03-10T14:51:55.627437+0000 mon.a (mon.0) 672 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:51:56.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:55 vm03 bash[23394]: audit 2026-03-10T14:51:55.627437+0000 mon.a (mon.0) 672 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:51:56.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:55 vm03 bash[23394]: audit 2026-03-10T14:51:55.786644+0000 mon.a (mon.0) 673 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:51:56.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 
14:51:55 vm03 bash[23394]: audit 2026-03-10T14:51:55.786644+0000 mon.a (mon.0) 673 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:51:56.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:55 vm03 bash[23394]: audit 2026-03-10T14:51:55.793466+0000 mon.a (mon.0) 674 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:51:56.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:55 vm03 bash[23394]: audit 2026-03-10T14:51:55.793466+0000 mon.a (mon.0) 674 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:51:56.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:55 vm03 bash[23394]: audit 2026-03-10T14:51:55.794391+0000 mon.a (mon.0) 675 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:51:56.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:55 vm03 bash[23394]: audit 2026-03-10T14:51:55.794391+0000 mon.a (mon.0) 675 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:51:56.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:55 vm03 bash[23394]: audit 2026-03-10T14:51:55.795109+0000 mon.a (mon.0) 676 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:51:56.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:55 vm03 bash[23394]: audit 2026-03-10T14:51:55.795109+0000 mon.a (mon.0) 676 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:51:56.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:55 vm03 bash[23394]: cluster 2026-03-10T14:51:55.909086+0000 mon.a (mon.0) 677 : cluster [DBG] osdmap e61: 8 total, 8 up, 8 in 
2026-03-10T14:51:56.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:55 vm03 bash[23394]: cluster 2026-03-10T14:51:55.909086+0000 mon.a (mon.0) 677 : cluster [DBG] osdmap e61: 8 total, 8 up, 8 in 2026-03-10T14:51:56.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:55 vm03 bash[23394]: audit 2026-03-10T14:51:55.924033+0000 mon.c (mon.2) 15 : audit [INF] from='client.? 192.168.123.100:0/1863131937' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-10T14:51:56.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:55 vm03 bash[23394]: audit 2026-03-10T14:51:55.924033+0000 mon.c (mon.2) 15 : audit [INF] from='client.? 192.168.123.100:0/1863131937' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-10T14:51:56.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:55 vm03 bash[23394]: audit 2026-03-10T14:51:55.924744+0000 mon.a (mon.0) 678 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-10T14:51:56.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:55 vm03 bash[23394]: audit 2026-03-10T14:51:55.924744+0000 mon.a (mon.0) 678 : audit [INF] from='client.? 
' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-10T14:51:57.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:56 vm00 bash[28403]: cephadm 2026-03-10T14:51:55.798303+0000 mgr.y (mgr.14152) 248 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-10T14:51:57.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:56 vm00 bash[28403]: cephadm 2026-03-10T14:51:55.798303+0000 mgr.y (mgr.14152) 248 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-10T14:51:57.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:56 vm00 bash[28403]: audit 2026-03-10T14:51:56.913348+0000 mon.a (mon.0) 679 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished 2026-03-10T14:51:57.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:56 vm00 bash[28403]: audit 2026-03-10T14:51:56.913348+0000 mon.a (mon.0) 679 : audit [INF] from='client.? 
' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished 2026-03-10T14:51:57.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:56 vm00 bash[28403]: cluster 2026-03-10T14:51:56.926180+0000 mon.a (mon.0) 680 : cluster [DBG] osdmap e62: 8 total, 8 up, 8 in 2026-03-10T14:51:57.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:56 vm00 bash[28403]: cluster 2026-03-10T14:51:56.926180+0000 mon.a (mon.0) 680 : cluster [DBG] osdmap e62: 8 total, 8 up, 8 in 2026-03-10T14:51:57.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:56 vm00 bash[20726]: cephadm 2026-03-10T14:51:55.798303+0000 mgr.y (mgr.14152) 248 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-10T14:51:57.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:56 vm00 bash[20726]: cephadm 2026-03-10T14:51:55.798303+0000 mgr.y (mgr.14152) 248 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-10T14:51:57.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:56 vm00 bash[20726]: audit 2026-03-10T14:51:56.913348+0000 mon.a (mon.0) 679 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished 2026-03-10T14:51:57.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:56 vm00 bash[20726]: audit 2026-03-10T14:51:56.913348+0000 mon.a (mon.0) 679 : audit [INF] from='client.? 
' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished 2026-03-10T14:51:57.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:56 vm00 bash[20726]: cluster 2026-03-10T14:51:56.926180+0000 mon.a (mon.0) 680 : cluster [DBG] osdmap e62: 8 total, 8 up, 8 in 2026-03-10T14:51:57.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:56 vm00 bash[20726]: cluster 2026-03-10T14:51:56.926180+0000 mon.a (mon.0) 680 : cluster [DBG] osdmap e62: 8 total, 8 up, 8 in 2026-03-10T14:51:57.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:56 vm03 bash[23394]: cephadm 2026-03-10T14:51:55.798303+0000 mgr.y (mgr.14152) 248 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-10T14:51:57.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:56 vm03 bash[23394]: cephadm 2026-03-10T14:51:55.798303+0000 mgr.y (mgr.14152) 248 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-10T14:51:57.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:56 vm03 bash[23394]: audit 2026-03-10T14:51:56.913348+0000 mon.a (mon.0) 679 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished 2026-03-10T14:51:57.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:56 vm03 bash[23394]: audit 2026-03-10T14:51:56.913348+0000 mon.a (mon.0) 679 : audit [INF] from='client.? 
' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished 2026-03-10T14:51:57.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:56 vm03 bash[23394]: cluster 2026-03-10T14:51:56.926180+0000 mon.a (mon.0) 680 : cluster [DBG] osdmap e62: 8 total, 8 up, 8 in 2026-03-10T14:51:57.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:56 vm03 bash[23394]: cluster 2026-03-10T14:51:56.926180+0000 mon.a (mon.0) 680 : cluster [DBG] osdmap e62: 8 total, 8 up, 8 in 2026-03-10T14:51:58.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:57 vm00 bash[28403]: cluster 2026-03-10T14:51:56.736727+0000 mgr.y (mgr.14152) 249 : cluster [DBG] pgmap v231: 100 pgs: 22 creating+peering, 25 active+clean, 53 unknown; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1017 B/s rd, 254 B/s wr, 1 op/s 2026-03-10T14:51:58.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:57 vm00 bash[28403]: cluster 2026-03-10T14:51:56.736727+0000 mgr.y (mgr.14152) 249 : cluster [DBG] pgmap v231: 100 pgs: 22 creating+peering, 25 active+clean, 53 unknown; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1017 B/s rd, 254 B/s wr, 1 op/s 2026-03-10T14:51:58.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:57 vm00 bash[28403]: cluster 2026-03-10T14:51:57.926418+0000 mon.a (mon.0) 681 : cluster [DBG] osdmap e63: 8 total, 8 up, 8 in 2026-03-10T14:51:58.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:57 vm00 bash[28403]: cluster 2026-03-10T14:51:57.926418+0000 mon.a (mon.0) 681 : cluster [DBG] osdmap e63: 8 total, 8 up, 8 in 2026-03-10T14:51:58.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:57 vm00 bash[28403]: audit 2026-03-10T14:51:57.927289+0000 mon.c (mon.2) 16 : audit [INF] from='client.? 
192.168.123.100:0/1863131937' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T14:51:58.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:57 vm00 bash[28403]: audit 2026-03-10T14:51:57.927289+0000 mon.c (mon.2) 16 : audit [INF] from='client.? 192.168.123.100:0/1863131937' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T14:51:58.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:57 vm00 bash[28403]: audit 2026-03-10T14:51:57.928218+0000 mon.c (mon.2) 17 : audit [INF] from='client.? 192.168.123.100:0/224753364' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T14:51:58.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:57 vm00 bash[28403]: audit 2026-03-10T14:51:57.928218+0000 mon.c (mon.2) 17 : audit [INF] from='client.? 192.168.123.100:0/224753364' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T14:51:58.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:57 vm00 bash[28403]: audit 2026-03-10T14:51:57.929020+0000 mon.a (mon.0) 682 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T14:51:58.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:57 vm00 bash[28403]: audit 2026-03-10T14:51:57.929020+0000 mon.a (mon.0) 682 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T14:51:58.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:57 vm00 bash[28403]: audit 2026-03-10T14:51:57.929124+0000 mon.a (mon.0) 683 : audit [INF] from='client.? 
' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T14:51:58.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:57 vm00 bash[28403]: audit 2026-03-10T14:51:57.929124+0000 mon.a (mon.0) 683 : audit [INF] from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T14:51:58.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:57 vm00 bash[20726]: cluster 2026-03-10T14:51:56.736727+0000 mgr.y (mgr.14152) 249 : cluster [DBG] pgmap v231: 100 pgs: 22 creating+peering, 25 active+clean, 53 unknown; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1017 B/s rd, 254 B/s wr, 1 op/s 2026-03-10T14:51:58.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:57 vm00 bash[20726]: cluster 2026-03-10T14:51:56.736727+0000 mgr.y (mgr.14152) 249 : cluster [DBG] pgmap v231: 100 pgs: 22 creating+peering, 25 active+clean, 53 unknown; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1017 B/s rd, 254 B/s wr, 1 op/s 2026-03-10T14:51:58.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:57 vm00 bash[20726]: cluster 2026-03-10T14:51:57.926418+0000 mon.a (mon.0) 681 : cluster [DBG] osdmap e63: 8 total, 8 up, 8 in 2026-03-10T14:51:58.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:57 vm00 bash[20726]: cluster 2026-03-10T14:51:57.926418+0000 mon.a (mon.0) 681 : cluster [DBG] osdmap e63: 8 total, 8 up, 8 in 2026-03-10T14:51:58.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:57 vm00 bash[20726]: audit 2026-03-10T14:51:57.927289+0000 mon.c (mon.2) 16 : audit [INF] from='client.? 192.168.123.100:0/1863131937' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T14:51:58.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:57 vm00 bash[20726]: audit 2026-03-10T14:51:57.927289+0000 mon.c (mon.2) 16 : audit [INF] from='client.? 
192.168.123.100:0/1863131937' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T14:51:58.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:57 vm00 bash[20726]: audit 2026-03-10T14:51:57.928218+0000 mon.c (mon.2) 17 : audit [INF] from='client.? 192.168.123.100:0/224753364' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T14:51:58.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:57 vm00 bash[20726]: audit 2026-03-10T14:51:57.928218+0000 mon.c (mon.2) 17 : audit [INF] from='client.? 192.168.123.100:0/224753364' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T14:51:58.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:57 vm00 bash[20726]: audit 2026-03-10T14:51:57.929020+0000 mon.a (mon.0) 682 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T14:51:58.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:57 vm00 bash[20726]: audit 2026-03-10T14:51:57.929020+0000 mon.a (mon.0) 682 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T14:51:58.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:57 vm00 bash[20726]: audit 2026-03-10T14:51:57.929124+0000 mon.a (mon.0) 683 : audit [INF] from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T14:51:58.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:57 vm00 bash[20726]: audit 2026-03-10T14:51:57.929124+0000 mon.a (mon.0) 683 : audit [INF] from='client.? 
' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T14:51:58.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:57 vm03 bash[23394]: cluster 2026-03-10T14:51:56.736727+0000 mgr.y (mgr.14152) 249 : cluster [DBG] pgmap v231: 100 pgs: 22 creating+peering, 25 active+clean, 53 unknown; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1017 B/s rd, 254 B/s wr, 1 op/s 2026-03-10T14:51:58.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:57 vm03 bash[23394]: cluster 2026-03-10T14:51:56.736727+0000 mgr.y (mgr.14152) 249 : cluster [DBG] pgmap v231: 100 pgs: 22 creating+peering, 25 active+clean, 53 unknown; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1017 B/s rd, 254 B/s wr, 1 op/s 2026-03-10T14:51:58.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:57 vm03 bash[23394]: cluster 2026-03-10T14:51:57.926418+0000 mon.a (mon.0) 681 : cluster [DBG] osdmap e63: 8 total, 8 up, 8 in 2026-03-10T14:51:58.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:57 vm03 bash[23394]: cluster 2026-03-10T14:51:57.926418+0000 mon.a (mon.0) 681 : cluster [DBG] osdmap e63: 8 total, 8 up, 8 in 2026-03-10T14:51:58.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:57 vm03 bash[23394]: audit 2026-03-10T14:51:57.927289+0000 mon.c (mon.2) 16 : audit [INF] from='client.? 192.168.123.100:0/1863131937' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T14:51:58.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:57 vm03 bash[23394]: audit 2026-03-10T14:51:57.927289+0000 mon.c (mon.2) 16 : audit [INF] from='client.? 
192.168.123.100:0/1863131937' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T14:51:58.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:57 vm03 bash[23394]: audit 2026-03-10T14:51:57.928218+0000 mon.c (mon.2) 17 : audit [INF] from='client.? 192.168.123.100:0/224753364' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T14:51:58.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:57 vm03 bash[23394]: audit 2026-03-10T14:51:57.928218+0000 mon.c (mon.2) 17 : audit [INF] from='client.? 192.168.123.100:0/224753364' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T14:51:58.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:57 vm03 bash[23394]: audit 2026-03-10T14:51:57.929020+0000 mon.a (mon.0) 682 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T14:51:58.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:57 vm03 bash[23394]: audit 2026-03-10T14:51:57.929020+0000 mon.a (mon.0) 682 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T14:51:58.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:57 vm03 bash[23394]: audit 2026-03-10T14:51:57.929124+0000 mon.a (mon.0) 683 : audit [INF] from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T14:51:58.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:57 vm03 bash[23394]: audit 2026-03-10T14:51:57.929124+0000 mon.a (mon.0) 683 : audit [INF] from='client.? 
' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T14:51:59.653 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/mon.b/config 2026-03-10T14:52:00.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:59 vm00 bash[28403]: cluster 2026-03-10T14:51:58.737112+0000 mgr.y (mgr.14152) 250 : cluster [DBG] pgmap v234: 132 pgs: 20 creating+peering, 86 active+clean, 26 unknown; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 4.2 KiB/s rd, 1.7 KiB/s wr, 7 op/s 2026-03-10T14:52:00.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:59 vm00 bash[28403]: cluster 2026-03-10T14:51:58.737112+0000 mgr.y (mgr.14152) 250 : cluster [DBG] pgmap v234: 132 pgs: 20 creating+peering, 86 active+clean, 26 unknown; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 4.2 KiB/s rd, 1.7 KiB/s wr, 7 op/s 2026-03-10T14:52:00.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:59 vm00 bash[28403]: audit 2026-03-10T14:51:58.920018+0000 mon.a (mon.0) 684 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-10T14:52:00.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:59 vm00 bash[28403]: audit 2026-03-10T14:51:58.920018+0000 mon.a (mon.0) 684 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-10T14:52:00.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:59 vm00 bash[28403]: audit 2026-03-10T14:51:58.920206+0000 mon.a (mon.0) 685 : audit [INF] from='client.? 
' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-10T14:52:00.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:59 vm00 bash[28403]: audit 2026-03-10T14:51:58.920206+0000 mon.a (mon.0) 685 : audit [INF] from='client.? ' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-10T14:52:00.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:59 vm00 bash[28403]: audit 2026-03-10T14:51:58.932099+0000 mon.c (mon.2) 18 : audit [INF] from='client.? 192.168.123.100:0/1863131937' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-10T14:52:00.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:59 vm00 bash[28403]: audit 2026-03-10T14:51:58.932099+0000 mon.c (mon.2) 18 : audit [INF] from='client.? 192.168.123.100:0/1863131937' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-10T14:52:00.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:59 vm00 bash[28403]: audit 2026-03-10T14:51:58.932272+0000 mon.c (mon.2) 19 : audit [INF] from='client.? 192.168.123.100:0/224753364' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-10T14:52:00.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:59 vm00 bash[28403]: audit 2026-03-10T14:51:58.932272+0000 mon.c (mon.2) 19 : audit [INF] from='client.? 
192.168.123.100:0/224753364' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-10T14:52:00.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:59 vm00 bash[28403]: cluster 2026-03-10T14:51:58.932636+0000 mon.a (mon.0) 686 : cluster [DBG] osdmap e64: 8 total, 8 up, 8 in 2026-03-10T14:52:00.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:59 vm00 bash[28403]: cluster 2026-03-10T14:51:58.932636+0000 mon.a (mon.0) 686 : cluster [DBG] osdmap e64: 8 total, 8 up, 8 in 2026-03-10T14:52:00.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:59 vm00 bash[28403]: audit 2026-03-10T14:51:58.938040+0000 mon.a (mon.0) 687 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-10T14:52:00.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:59 vm00 bash[28403]: audit 2026-03-10T14:51:58.938040+0000 mon.a (mon.0) 687 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-10T14:52:00.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:59 vm00 bash[28403]: audit 2026-03-10T14:51:58.939129+0000 mon.a (mon.0) 688 : audit [INF] from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-10T14:52:00.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:59 vm00 bash[28403]: audit 2026-03-10T14:51:58.939129+0000 mon.a (mon.0) 688 : audit [INF] from='client.? 
' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-10T14:52:00.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:59 vm00 bash[28403]: audit 2026-03-10T14:51:59.847525+0000 mon.b (mon.1) 27 : audit [INF] from='client.? 192.168.123.103:0/2471747460' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-10T14:52:00.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:59 vm00 bash[28403]: audit 2026-03-10T14:51:59.847525+0000 mon.b (mon.1) 27 : audit [INF] from='client.? 192.168.123.103:0/2471747460' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-10T14:52:00.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:59 vm00 bash[28403]: audit 2026-03-10T14:51:59.849700+0000 mon.a (mon.0) 689 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-10T14:52:00.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:51:59 vm00 bash[28403]: audit 2026-03-10T14:51:59.849700+0000 mon.a (mon.0) 689 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-10T14:52:00.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:59 vm00 bash[20726]: cluster 2026-03-10T14:51:58.737112+0000 mgr.y (mgr.14152) 250 : cluster [DBG] pgmap v234: 132 pgs: 20 creating+peering, 86 active+clean, 26 unknown; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 4.2 KiB/s rd, 1.7 KiB/s wr, 7 op/s 2026-03-10T14:52:00.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:59 vm00 bash[20726]: cluster 2026-03-10T14:51:58.737112+0000 mgr.y (mgr.14152) 250 : cluster [DBG] pgmap v234: 132 pgs: 20 creating+peering, 86 active+clean, 26 unknown; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 4.2 KiB/s rd, 1.7 KiB/s wr, 7 op/s 2026-03-10T14:52:00.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:59 vm00 bash[20726]: audit 2026-03-10T14:51:58.920018+0000 mon.a (mon.0) 684 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-10T14:52:00.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:59 vm00 bash[20726]: audit 2026-03-10T14:51:58.920018+0000 mon.a (mon.0) 684 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-10T14:52:00.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:59 vm00 bash[20726]: audit 2026-03-10T14:51:58.920206+0000 mon.a (mon.0) 685 : audit [INF] from='client.? ' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-10T14:52:00.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:59 vm00 bash[20726]: audit 2026-03-10T14:51:58.920206+0000 mon.a (mon.0) 685 : audit [INF] from='client.? 
' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-10T14:52:00.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:59 vm00 bash[20726]: audit 2026-03-10T14:51:58.932099+0000 mon.c (mon.2) 18 : audit [INF] from='client.? 192.168.123.100:0/1863131937' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-10T14:52:00.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:59 vm00 bash[20726]: audit 2026-03-10T14:51:58.932099+0000 mon.c (mon.2) 18 : audit [INF] from='client.? 192.168.123.100:0/1863131937' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-10T14:52:00.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:59 vm00 bash[20726]: audit 2026-03-10T14:51:58.932272+0000 mon.c (mon.2) 19 : audit [INF] from='client.? 192.168.123.100:0/224753364' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-10T14:52:00.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:59 vm00 bash[20726]: audit 2026-03-10T14:51:58.932272+0000 mon.c (mon.2) 19 : audit [INF] from='client.? 
192.168.123.100:0/224753364' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-10T14:52:00.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:59 vm00 bash[20726]: cluster 2026-03-10T14:51:58.932636+0000 mon.a (mon.0) 686 : cluster [DBG] osdmap e64: 8 total, 8 up, 8 in 2026-03-10T14:52:00.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:59 vm00 bash[20726]: cluster 2026-03-10T14:51:58.932636+0000 mon.a (mon.0) 686 : cluster [DBG] osdmap e64: 8 total, 8 up, 8 in 2026-03-10T14:52:00.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:59 vm00 bash[20726]: audit 2026-03-10T14:51:58.938040+0000 mon.a (mon.0) 687 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-10T14:52:00.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:59 vm00 bash[20726]: audit 2026-03-10T14:51:58.938040+0000 mon.a (mon.0) 687 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-10T14:52:00.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:59 vm00 bash[20726]: audit 2026-03-10T14:51:58.939129+0000 mon.a (mon.0) 688 : audit [INF] from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-10T14:52:00.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:59 vm00 bash[20726]: audit 2026-03-10T14:51:58.939129+0000 mon.a (mon.0) 688 : audit [INF] from='client.? 
' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-10T14:52:00.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:59 vm00 bash[20726]: audit 2026-03-10T14:51:59.847525+0000 mon.b (mon.1) 27 : audit [INF] from='client.? 192.168.123.103:0/2471747460' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-10T14:52:00.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:59 vm00 bash[20726]: audit 2026-03-10T14:51:59.847525+0000 mon.b (mon.1) 27 : audit [INF] from='client.? 192.168.123.103:0/2471747460' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-10T14:52:00.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:59 vm00 bash[20726]: audit 2026-03-10T14:51:59.849700+0000 mon.a (mon.0) 689 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-10T14:52:00.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:51:59 vm00 bash[20726]: audit 2026-03-10T14:51:59.849700+0000 mon.a (mon.0) 689 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-10T14:52:00.217 INFO:journalctl@ceph.rgw.foo.a.vm00.stdout:Mar 10 14:52:00 vm00 bash[53572]: debug 2026-03-10T14:52:00.012+0000 7f9248565980 -1 LDAP not started since no server URIs were provided in the configuration. 
2026-03-10T14:52:00.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:59 vm03 bash[23394]: cluster 2026-03-10T14:51:58.737112+0000 mgr.y (mgr.14152) 250 : cluster [DBG] pgmap v234: 132 pgs: 20 creating+peering, 86 active+clean, 26 unknown; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 4.2 KiB/s rd, 1.7 KiB/s wr, 7 op/s 2026-03-10T14:52:00.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:59 vm03 bash[23394]: cluster 2026-03-10T14:51:58.737112+0000 mgr.y (mgr.14152) 250 : cluster [DBG] pgmap v234: 132 pgs: 20 creating+peering, 86 active+clean, 26 unknown; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 4.2 KiB/s rd, 1.7 KiB/s wr, 7 op/s 2026-03-10T14:52:00.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:59 vm03 bash[23394]: audit 2026-03-10T14:51:58.920018+0000 mon.a (mon.0) 684 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-10T14:52:00.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:59 vm03 bash[23394]: audit 2026-03-10T14:51:58.920018+0000 mon.a (mon.0) 684 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-10T14:52:00.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:59 vm03 bash[23394]: audit 2026-03-10T14:51:58.920206+0000 mon.a (mon.0) 685 : audit [INF] from='client.? ' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-10T14:52:00.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:59 vm03 bash[23394]: audit 2026-03-10T14:51:58.920206+0000 mon.a (mon.0) 685 : audit [INF] from='client.? 
' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-10T14:52:00.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:59 vm03 bash[23394]: audit 2026-03-10T14:51:58.932099+0000 mon.c (mon.2) 18 : audit [INF] from='client.? 192.168.123.100:0/1863131937' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-10T14:52:00.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:59 vm03 bash[23394]: audit 2026-03-10T14:51:58.932099+0000 mon.c (mon.2) 18 : audit [INF] from='client.? 192.168.123.100:0/1863131937' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-10T14:52:00.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:59 vm03 bash[23394]: audit 2026-03-10T14:51:58.932272+0000 mon.c (mon.2) 19 : audit [INF] from='client.? 192.168.123.100:0/224753364' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-10T14:52:00.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:59 vm03 bash[23394]: audit 2026-03-10T14:51:58.932272+0000 mon.c (mon.2) 19 : audit [INF] from='client.? 
192.168.123.100:0/224753364' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-10T14:52:00.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:59 vm03 bash[23394]: cluster 2026-03-10T14:51:58.932636+0000 mon.a (mon.0) 686 : cluster [DBG] osdmap e64: 8 total, 8 up, 8 in 2026-03-10T14:52:00.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:59 vm03 bash[23394]: cluster 2026-03-10T14:51:58.932636+0000 mon.a (mon.0) 686 : cluster [DBG] osdmap e64: 8 total, 8 up, 8 in 2026-03-10T14:52:00.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:59 vm03 bash[23394]: audit 2026-03-10T14:51:58.938040+0000 mon.a (mon.0) 687 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-10T14:52:00.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:59 vm03 bash[23394]: audit 2026-03-10T14:51:58.938040+0000 mon.a (mon.0) 687 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-10T14:52:00.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:59 vm03 bash[23394]: audit 2026-03-10T14:51:58.939129+0000 mon.a (mon.0) 688 : audit [INF] from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-10T14:52:00.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:59 vm03 bash[23394]: audit 2026-03-10T14:51:58.939129+0000 mon.a (mon.0) 688 : audit [INF] from='client.? 
' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-10T14:52:00.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:59 vm03 bash[23394]: audit 2026-03-10T14:51:59.847525+0000 mon.b (mon.1) 27 : audit [INF] from='client.? 192.168.123.103:0/2471747460' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-10T14:52:00.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:59 vm03 bash[23394]: audit 2026-03-10T14:51:59.847525+0000 mon.b (mon.1) 27 : audit [INF] from='client.? 192.168.123.103:0/2471747460' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-10T14:52:00.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:59 vm03 bash[23394]: audit 2026-03-10T14:51:59.849700+0000 mon.a (mon.0) 689 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-10T14:52:00.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:51:59 vm03 bash[23394]: audit 2026-03-10T14:51:59.849700+0000 mon.a (mon.0) 689 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-10T14:52:01.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:00 vm00 bash[28403]: audit 2026-03-10T14:51:59.934780+0000 mon.a (mon.0) 690 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-10T14:52:01.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:00 vm00 bash[28403]: audit 2026-03-10T14:51:59.934780+0000 mon.a (mon.0) 690 : audit [INF] from='client.? 
' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-10T14:52:01.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:00 vm00 bash[28403]: audit 2026-03-10T14:51:59.935002+0000 mon.a (mon.0) 691 : audit [INF] from='client.? ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-10T14:52:01.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:00 vm00 bash[28403]: audit 2026-03-10T14:51:59.935002+0000 mon.a (mon.0) 691 : audit [INF] from='client.? ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-10T14:52:01.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:00 vm00 bash[28403]: audit 2026-03-10T14:51:59.935169+0000 mon.a (mon.0) 692 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]': finished 2026-03-10T14:52:01.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:00 vm00 bash[28403]: audit 2026-03-10T14:51:59.935169+0000 mon.a (mon.0) 692 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]': finished 2026-03-10T14:52:01.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:00 vm00 bash[28403]: cluster 2026-03-10T14:51:59.944119+0000 mon.a (mon.0) 693 : cluster [DBG] osdmap e65: 8 total, 8 up, 8 in 2026-03-10T14:52:01.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:00 vm00 bash[28403]: cluster 2026-03-10T14:51:59.944119+0000 mon.a (mon.0) 693 : cluster [DBG] osdmap e65: 8 total, 8 up, 8 in 2026-03-10T14:52:01.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:00 vm00 bash[28403]: audit 2026-03-10T14:52:00.176912+0000 mon.a (mon.0) 694 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:52:01.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:00 vm00 bash[28403]: audit 2026-03-10T14:52:00.176912+0000 mon.a (mon.0) 694 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:52:01.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:00 vm00 bash[28403]: audit 2026-03-10T14:52:00.192346+0000 mon.a (mon.0) 695 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:52:01.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:00 vm00 bash[28403]: audit 2026-03-10T14:52:00.192346+0000 mon.a (mon.0) 695 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:52:01.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:00 vm00 bash[28403]: audit 2026-03-10T14:52:00.209878+0000 mon.a (mon.0) 696 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:52:01.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:00 vm00 bash[28403]: audit 2026-03-10T14:52:00.209878+0000 mon.a (mon.0) 696 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:52:01.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:00 vm00 bash[28403]: audit 
2026-03-10T14:52:00.248940+0000 mon.a (mon.0) 697 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:52:01.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:00 vm00 bash[28403]: audit 2026-03-10T14:52:00.248940+0000 mon.a (mon.0) 697 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:52:01.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:00 vm00 bash[28403]: audit 2026-03-10T14:52:00.656084+0000 mon.a (mon.0) 698 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:52:01.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:00 vm00 bash[28403]: audit 2026-03-10T14:52:00.656084+0000 mon.a (mon.0) 698 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:52:01.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:00 vm00 bash[28403]: audit 2026-03-10T14:52:00.656858+0000 mon.a (mon.0) 699 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:52:01.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:00 vm00 bash[28403]: audit 2026-03-10T14:52:00.656858+0000 mon.a (mon.0) 699 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:52:01.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:00 vm00 bash[28403]: audit 2026-03-10T14:52:00.889861+0000 mon.a (mon.0) 700 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:52:01.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:00 vm00 bash[28403]: audit 2026-03-10T14:52:00.889861+0000 
mon.a (mon.0) 700 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:52:01.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:00 vm00 bash[20726]: audit 2026-03-10T14:51:59.934780+0000 mon.a (mon.0) 690 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-10T14:52:01.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:00 vm00 bash[20726]: audit 2026-03-10T14:51:59.934780+0000 mon.a (mon.0) 690 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-10T14:52:01.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:00 vm00 bash[20726]: audit 2026-03-10T14:51:59.935002+0000 mon.a (mon.0) 691 : audit [INF] from='client.? ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-10T14:52:01.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:00 vm00 bash[20726]: audit 2026-03-10T14:51:59.935002+0000 mon.a (mon.0) 691 : audit [INF] from='client.? ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-10T14:52:01.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:00 vm00 bash[20726]: audit 2026-03-10T14:51:59.935169+0000 mon.a (mon.0) 692 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]': finished 2026-03-10T14:52:01.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:00 vm00 bash[20726]: audit 2026-03-10T14:51:59.935169+0000 mon.a (mon.0) 692 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]': finished 2026-03-10T14:52:01.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:00 vm00 bash[20726]: cluster 2026-03-10T14:51:59.944119+0000 mon.a (mon.0) 693 : cluster [DBG] osdmap e65: 8 total, 8 up, 8 in 2026-03-10T14:52:01.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:00 vm00 bash[20726]: cluster 2026-03-10T14:51:59.944119+0000 mon.a (mon.0) 693 : cluster [DBG] osdmap e65: 8 total, 8 up, 8 in 2026-03-10T14:52:01.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:00 vm00 bash[20726]: audit 2026-03-10T14:52:00.176912+0000 mon.a (mon.0) 694 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:52:01.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:00 vm00 bash[20726]: audit 2026-03-10T14:52:00.176912+0000 mon.a (mon.0) 694 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:52:01.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:00 vm00 bash[20726]: audit 2026-03-10T14:52:00.192346+0000 mon.a (mon.0) 695 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:52:01.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:00 vm00 bash[20726]: audit 2026-03-10T14:52:00.192346+0000 mon.a (mon.0) 695 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:52:01.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:00 vm00 bash[20726]: audit 2026-03-10T14:52:00.209878+0000 mon.a (mon.0) 696 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:52:01.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:00 vm00 bash[20726]: audit 2026-03-10T14:52:00.209878+0000 mon.a (mon.0) 696 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:52:01.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:00 vm00 bash[20726]: audit 
2026-03-10T14:52:00.248940+0000 mon.a (mon.0) 697 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:52:01.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:00 vm00 bash[20726]: audit 2026-03-10T14:52:00.248940+0000 mon.a (mon.0) 697 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:52:01.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:00 vm00 bash[20726]: audit 2026-03-10T14:52:00.656084+0000 mon.a (mon.0) 698 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:52:01.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:00 vm00 bash[20726]: audit 2026-03-10T14:52:00.656084+0000 mon.a (mon.0) 698 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:52:01.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:00 vm00 bash[20726]: audit 2026-03-10T14:52:00.656858+0000 mon.a (mon.0) 699 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:52:01.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:00 vm00 bash[20726]: audit 2026-03-10T14:52:00.656858+0000 mon.a (mon.0) 699 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:52:01.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:00 vm00 bash[20726]: audit 2026-03-10T14:52:00.889861+0000 mon.a (mon.0) 700 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:52:01.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:00 vm00 bash[20726]: audit 2026-03-10T14:52:00.889861+0000 
mon.a (mon.0) 700 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:52:01.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:00 vm03 bash[23394]: audit 2026-03-10T14:51:59.934780+0000 mon.a (mon.0) 690 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-10T14:52:01.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:00 vm03 bash[23394]: audit 2026-03-10T14:51:59.934780+0000 mon.a (mon.0) 690 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-10T14:52:01.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:00 vm03 bash[23394]: audit 2026-03-10T14:51:59.935002+0000 mon.a (mon.0) 691 : audit [INF] from='client.? ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-10T14:52:01.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:00 vm03 bash[23394]: audit 2026-03-10T14:51:59.935002+0000 mon.a (mon.0) 691 : audit [INF] from='client.? ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-10T14:52:01.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:00 vm03 bash[23394]: audit 2026-03-10T14:51:59.935169+0000 mon.a (mon.0) 692 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]': finished 2026-03-10T14:52:01.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:00 vm03 bash[23394]: audit 2026-03-10T14:51:59.935169+0000 mon.a (mon.0) 692 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]': finished 2026-03-10T14:52:01.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:00 vm03 bash[23394]: cluster 2026-03-10T14:51:59.944119+0000 mon.a (mon.0) 693 : cluster [DBG] osdmap e65: 8 total, 8 up, 8 in 2026-03-10T14:52:01.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:00 vm03 bash[23394]: cluster 2026-03-10T14:51:59.944119+0000 mon.a (mon.0) 693 : cluster [DBG] osdmap e65: 8 total, 8 up, 8 in 2026-03-10T14:52:01.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:00 vm03 bash[23394]: audit 2026-03-10T14:52:00.176912+0000 mon.a (mon.0) 694 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:52:01.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:00 vm03 bash[23394]: audit 2026-03-10T14:52:00.176912+0000 mon.a (mon.0) 694 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:52:01.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:00 vm03 bash[23394]: audit 2026-03-10T14:52:00.192346+0000 mon.a (mon.0) 695 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:52:01.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:00 vm03 bash[23394]: audit 2026-03-10T14:52:00.192346+0000 mon.a (mon.0) 695 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:52:01.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:00 vm03 bash[23394]: audit 2026-03-10T14:52:00.209878+0000 mon.a (mon.0) 696 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:52:01.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:00 vm03 bash[23394]: audit 2026-03-10T14:52:00.209878+0000 mon.a (mon.0) 696 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:52:01.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:00 vm03 bash[23394]: audit 
2026-03-10T14:52:00.248940+0000 mon.a (mon.0) 697 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:52:01.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:00 vm03 bash[23394]: audit 2026-03-10T14:52:00.248940+0000 mon.a (mon.0) 697 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:52:01.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:00 vm03 bash[23394]: audit 2026-03-10T14:52:00.656084+0000 mon.a (mon.0) 698 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:52:01.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:00 vm03 bash[23394]: audit 2026-03-10T14:52:00.656084+0000 mon.a (mon.0) 698 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:52:01.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:00 vm03 bash[23394]: audit 2026-03-10T14:52:00.656858+0000 mon.a (mon.0) 699 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:52:01.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:00 vm03 bash[23394]: audit 2026-03-10T14:52:00.656858+0000 mon.a (mon.0) 699 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:52:01.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:00 vm03 bash[23394]: audit 2026-03-10T14:52:00.889861+0000 mon.a (mon.0) 700 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:52:01.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:00 vm03 bash[23394]: audit 2026-03-10T14:52:00.889861+0000 
mon.a (mon.0) 700 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:52:02.158 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 -- ceph orch apply iscsi datapool admin admin --trusted_ip_list 192.168.123.103 --placement '1;vm03=iscsi.a' 2026-03-10T14:52:02.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:02 vm03 bash[23394]: cephadm 2026-03-10T14:52:00.659671+0000 mgr.y (mgr.14152) 251 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-10T14:52:02.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:02 vm03 bash[23394]: cephadm 2026-03-10T14:52:00.659671+0000 mgr.y (mgr.14152) 251 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-10T14:52:02.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:02 vm03 bash[23394]: cluster 2026-03-10T14:52:00.737633+0000 mgr.y (mgr.14152) 252 : cluster [DBG] pgmap v237: 132 pgs: 20 creating+peering, 86 active+clean, 26 unknown; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 3.2 KiB/s rd, 1.5 KiB/s wr, 6 op/s 2026-03-10T14:52:02.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:02 vm03 bash[23394]: cluster 2026-03-10T14:52:00.737633+0000 mgr.y (mgr.14152) 252 : cluster [DBG] pgmap v237: 132 pgs: 20 creating+peering, 86 active+clean, 26 unknown; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 3.2 KiB/s rd, 1.5 KiB/s wr, 6 op/s 2026-03-10T14:52:02.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:02 vm03 bash[23394]: cluster 2026-03-10T14:52:00.953403+0000 mon.a (mon.0) 701 : cluster [DBG] osdmap e66: 8 total, 8 up, 8 in 2026-03-10T14:52:02.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:02 vm03 bash[23394]: cluster 2026-03-10T14:52:00.953403+0000 mon.a (mon.0) 701 : cluster [DBG] osdmap e66: 8 total, 8 up, 8 in 
2026-03-10T14:52:02.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:02 vm03 bash[23394]: cluster 2026-03-10T14:52:01.208186+0000 mon.a (mon.0) 702 : cluster [INF] Health check cleared: POOL_APP_NOT_ENABLED (was: 2 pool(s) do not have an application enabled)
2026-03-10T14:52:02.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:02 vm03 bash[23394]: cluster 2026-03-10T14:52:01.208213+0000 mon.a (mon.0) 703 : cluster [INF] Cluster is now healthy
2026-03-10T14:52:02.466 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:02 vm00 bash[28403]: cephadm 2026-03-10T14:52:00.659671+0000 mgr.y (mgr.14152) 251 : cephadm [INF] Checking dashboard <-> RGW credentials
2026-03-10T14:52:02.466 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:02 vm00 bash[28403]: cluster 2026-03-10T14:52:00.737633+0000 mgr.y (mgr.14152) 252 : cluster [DBG] pgmap v237: 132 pgs: 20 creating+peering, 86 active+clean, 26 unknown; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 3.2 KiB/s rd, 1.5 KiB/s wr, 6 op/s
2026-03-10T14:52:02.466 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:02 vm00 bash[28403]: cluster 2026-03-10T14:52:00.953403+0000 mon.a (mon.0) 701 : cluster [DBG] osdmap e66: 8 total, 8 up, 8 in
2026-03-10T14:52:02.466 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:02 vm00 bash[28403]: cluster 2026-03-10T14:52:01.208186+0000 mon.a (mon.0) 702 : cluster [INF] Health check cleared: POOL_APP_NOT_ENABLED (was: 2 pool(s) do not have an application enabled)
2026-03-10T14:52:02.466 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:02 vm00 bash[28403]: cluster 2026-03-10T14:52:01.208213+0000 mon.a (mon.0) 703 : cluster [INF] Cluster is now healthy
2026-03-10T14:52:02.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:02 vm00 bash[20726]: cephadm 2026-03-10T14:52:00.659671+0000 mgr.y (mgr.14152) 251 : cephadm [INF] Checking dashboard <-> RGW credentials
2026-03-10T14:52:02.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:02 vm00 bash[20726]: cluster 2026-03-10T14:52:00.737633+0000 mgr.y (mgr.14152) 252 : cluster [DBG] pgmap v237: 132 pgs: 20 creating+peering, 86 active+clean, 26 unknown; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 3.2 KiB/s rd, 1.5 KiB/s wr, 6 op/s
2026-03-10T14:52:02.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:02 vm00 bash[20726]: cluster 2026-03-10T14:52:00.953403+0000 mon.a (mon.0) 701 : cluster [DBG] osdmap e66: 8 total, 8 up, 8 in
2026-03-10T14:52:02.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:02 vm00 bash[20726]: cluster 2026-03-10T14:52:01.208186+0000 mon.a (mon.0) 702 : cluster [INF] Health check cleared: POOL_APP_NOT_ENABLED (was: 2 pool(s) do not have an application enabled)
2026-03-10T14:52:02.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:02 vm00 bash[20726]: cluster 2026-03-10T14:52:01.208213+0000 mon.a (mon.0) 703 : cluster [INF] Cluster is now healthy
2026-03-10T14:52:03.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:03 vm03 bash[23394]: cluster 2026-03-10T14:52:01.953979+0000 mon.a (mon.0) 704 : cluster [DBG] osdmap e67: 8 total, 8 up, 8 in
2026-03-10T14:52:03.466 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:03 vm00 bash[28403]: cluster 2026-03-10T14:52:01.953979+0000 mon.a (mon.0) 704 : cluster [DBG] osdmap e67: 8 total, 8 up, 8 in
2026-03-10T14:52:03.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:03 vm00 bash[20726]: cluster 2026-03-10T14:52:01.953979+0000 mon.a (mon.0) 704 : cluster [DBG] osdmap e67: 8 total, 8 up, 8 in
2026-03-10T14:52:04.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:04 vm03 bash[23394]: cluster 2026-03-10T14:52:02.738096+0000 mgr.y (mgr.14152) 253 : cluster [DBG] pgmap v240: 132 pgs: 6 creating+peering, 126 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 103 KiB/s rd, 8.0 KiB/s wr, 243 op/s
2026-03-10T14:52:04.466 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:04 vm00 bash[28403]: cluster 2026-03-10T14:52:02.738096+0000 mgr.y (mgr.14152) 253 : cluster [DBG] pgmap v240: 132 pgs: 6 creating+peering, 126 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 103 KiB/s rd, 8.0 KiB/s wr, 243 op/s
2026-03-10T14:52:04.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:04 vm00 bash[20726]: cluster 2026-03-10T14:52:02.738096+0000 mgr.y (mgr.14152) 253 : cluster [DBG] pgmap v240: 132 pgs: 6 creating+peering, 126 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 103 KiB/s rd, 8.0 KiB/s wr, 243 op/s
2026-03-10T14:52:06.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:06 vm03 bash[23394]: cluster 2026-03-10T14:52:04.738456+0000 mgr.y (mgr.14152) 254 : cluster [DBG] pgmap v241: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 82 KiB/s rd, 6.2 KiB/s wr, 195 op/s
2026-03-10T14:52:06.466 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:06 vm00 bash[28403]: cluster 2026-03-10T14:52:04.738456+0000 mgr.y (mgr.14152) 254 : cluster [DBG] pgmap v241: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 82 KiB/s rd, 6.2 KiB/s wr, 195 op/s
2026-03-10T14:52:06.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:06 vm00 bash[20726]: cluster 2026-03-10T14:52:04.738456+0000 mgr.y (mgr.14152) 254 : cluster [DBG] pgmap v241: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 82 KiB/s rd, 6.2 KiB/s wr, 195 op/s
2026-03-10T14:52:06.787 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/mon.b/config
2026-03-10T14:52:07.090 INFO:teuthology.orchestra.run.vm03.stdout:Scheduled iscsi.datapool update...
2026-03-10T14:52:07.303 INFO:tasks.cephadm:Distributing iscsi-gateway.cfg...
2026-03-10T14:52:07.303 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-03-10T14:52:07.303 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/etc/ceph/iscsi-gateway.cfg
2026-03-10T14:52:07.317 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-03-10T14:52:07.317 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/etc/ceph/iscsi-gateway.cfg
2026-03-10T14:52:07.325 DEBUG:teuthology.orchestra.run.vm03:iscsi.iscsi.a> sudo journalctl -f -n 0 -u ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@iscsi.iscsi.a.service
2026-03-10T14:52:07.367 INFO:tasks.cephadm:Adding prometheus.a on vm03
2026-03-10T14:52:07.367 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 -- ceph orch apply prometheus '1;vm03=a'
2026-03-10T14:52:07.938 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:07 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:52:07.939 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:52:07 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:52:07.939 INFO:journalctl@ceph.osd.4.vm03.stdout:Mar 10 14:52:07 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:52:07.939 INFO:journalctl@ceph.osd.5.vm03.stdout:Mar 10 14:52:07 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:52:07.939 INFO:journalctl@ceph.osd.6.vm03.stdout:Mar 10 14:52:07 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:52:07.939 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 14:52:07 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:52:08.204 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:08 vm03 bash[23394]: cluster 2026-03-10T14:52:06.738806+0000 mgr.y (mgr.14152) 255 : cluster [DBG] pgmap v242: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 70 KiB/s rd, 5.3 KiB/s wr, 167 op/s
2026-03-10T14:52:08.204 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:08 vm03 bash[23394]: audit 2026-03-10T14:52:07.086913+0000 mgr.y (mgr.14152) 256 : audit [DBG] from='client.24382 -' entity='client.admin' cmd=[{"prefix": "orch apply iscsi", "pool": "datapool", "api_user": "admin", "api_password": "admin", "trusted_ip_list": "192.168.123.103", "placement": "1;vm03=iscsi.a", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T14:52:08.204 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:08 vm03 bash[23394]: cephadm 2026-03-10T14:52:07.088312+0000 mgr.y (mgr.14152) 257 : cephadm [INF] Saving service iscsi.datapool spec with placement vm03=iscsi.a;count:1
2026-03-10T14:52:08.205 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:08 vm03 bash[23394]: audit 2026-03-10T14:52:07.091833+0000 mon.a (mon.0) 705 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:52:08.205 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:08 vm03 bash[23394]: audit 2026-03-10T14:52:07.092521+0000 mon.a (mon.0) 706 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:52:08.205 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:08 vm03 bash[23394]: audit 2026-03-10T14:52:07.093874+0000 mon.a (mon.0) 707 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:52:08.205 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:08 vm03 bash[23394]: audit 2026-03-10T14:52:07.094392+0000 mon.a (mon.0) 708 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T14:52:08.205 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:08 vm03 bash[23394]: audit 2026-03-10T14:52:07.098376+0000 mon.a (mon.0) 709 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:52:08.205 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:08 vm03 bash[23394]: audit 2026-03-10T14:52:07.099770+0000 mon.a (mon.0) 710 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch
2026-03-10T14:52:08.205 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:08 vm03 bash[23394]: audit 2026-03-10T14:52:07.101799+0000 mon.a (mon.0) 711 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished
2026-03-10T14:52:08.205 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:08 vm03 bash[23394]: audit 2026-03-10T14:52:07.106078+0000 mon.a (mon.0) 712 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:52:08.205 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:08 vm03 bash[23394]: cephadm 2026-03-10T14:52:07.106547+0000 mgr.y (mgr.14152) 258 : cephadm [INF] Deploying daemon iscsi.iscsi.a on vm03
2026-03-10T14:52:08.205 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:08 vm03 bash[23394]: audit 2026-03-10T14:52:08.066273+0000 mon.a (mon.0) 713 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:52:08.205 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:08 vm03 bash[23394]: audit 2026-03-10T14:52:08.076346+0000 mon.a (mon.0) 714 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:52:08.205 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:08 vm03 bash[23394]: audit 2026-03-10T14:52:08.085097+0000 mon.a (mon.0) 715 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:52:08.466 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:08 vm00 bash[28403]: cluster 2026-03-10T14:52:06.738806+0000 mgr.y (mgr.14152) 255 : cluster [DBG] pgmap v242: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 70 KiB/s rd, 5.3 KiB/s wr, 167 op/s
2026-03-10T14:52:08.466 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:08 vm00 bash[28403]: audit 2026-03-10T14:52:07.086913+0000 mgr.y (mgr.14152) 256 : audit [DBG] from='client.24382 -' entity='client.admin' cmd=[{"prefix": "orch apply iscsi", "pool": "datapool", "api_user": "admin", "api_password": "admin", "trusted_ip_list": "192.168.123.103", "placement": "1;vm03=iscsi.a", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T14:52:08.466 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:08 vm00 bash[28403]: cephadm 2026-03-10T14:52:07.088312+0000 mgr.y (mgr.14152) 257 : cephadm [INF] Saving service iscsi.datapool spec with placement vm03=iscsi.a;count:1
2026-03-10T14:52:08.466 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:08 vm00 bash[28403]: audit 2026-03-10T14:52:07.091833+0000 mon.a (mon.0) 705 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:52:08.466 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:08 vm00 bash[28403]: audit 2026-03-10T14:52:07.092521+0000 mon.a (mon.0) 706 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:52:08.466 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:08 vm00 bash[28403]: audit 2026-03-10T14:52:07.093874+0000 mon.a (mon.0) 707 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:52:08.466 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:08 vm00 bash[28403]: audit 2026-03-10T14:52:07.094392+0000 mon.a (mon.0) 708 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T14:52:08.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:08 vm00 bash[28403]: audit 2026-03-10T14:52:07.098376+0000 mon.a (mon.0) 709 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:52:08.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:08 vm00 bash[28403]: audit 2026-03-10T14:52:07.099770+0000 mon.a (mon.0) 710 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch
2026-03-10T14:52:08.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:08 vm00 bash[28403]: audit 2026-03-10T14:52:07.101799+0000 mon.a (mon.0) 711 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished
2026-03-10T14:52:08.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:08 vm00 bash[28403]: audit 2026-03-10T14:52:07.106078+0000 mon.a (mon.0) 712 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:52:08.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:08 vm00 bash[28403]: cephadm 2026-03-10T14:52:07.106547+0000 mgr.y (mgr.14152) 258 : cephadm [INF] Deploying daemon iscsi.iscsi.a on vm03
2026-03-10T14:52:08.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:08 vm00 bash[28403]: audit 2026-03-10T14:52:08.066273+0000 mon.a (mon.0) 713 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:52:08.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:08 vm00 bash[28403]: audit 2026-03-10T14:52:08.076346+0000 mon.a (mon.0) 714 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:52:08.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:08 vm00 bash[28403]: audit 2026-03-10T14:52:08.085097+0000 mon.a (mon.0) 715 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:52:08.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:08 vm00 bash[20726]: cluster 2026-03-10T14:52:06.738806+0000 mgr.y (mgr.14152) 255 : cluster [DBG] pgmap v242: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 70 KiB/s rd, 5.3 KiB/s wr, 167 op/s
2026-03-10T14:52:08.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:08 vm00 bash[20726]: audit 2026-03-10T14:52:07.086913+0000 mgr.y (mgr.14152) 256 : audit [DBG] from='client.24382 -' entity='client.admin' cmd=[{"prefix": "orch apply iscsi", "pool": "datapool", "api_user": "admin", "api_password": "admin", "trusted_ip_list": "192.168.123.103", "placement": "1;vm03=iscsi.a", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T14:52:08.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:08 vm00 bash[20726]: cephadm 2026-03-10T14:52:07.088312+0000 mgr.y (mgr.14152) 257 : cephadm [INF] Saving service iscsi.datapool spec with placement vm03=iscsi.a;count:1
2026-03-10T14:52:08.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:08 vm00 bash[20726]: audit 2026-03-10T14:52:07.091833+0000 mon.a (mon.0) 705 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:52:08.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:08 vm00 bash[20726]: audit 2026-03-10T14:52:07.092521+0000 mon.a (mon.0) 706 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
"json"}]: dispatch 2026-03-10T14:52:08.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:08 vm00 bash[20726]: audit 2026-03-10T14:52:07.093874+0000 mon.a (mon.0) 707 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:52:08.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:08 vm00 bash[20726]: audit 2026-03-10T14:52:07.093874+0000 mon.a (mon.0) 707 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:52:08.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:08 vm00 bash[20726]: audit 2026-03-10T14:52:07.094392+0000 mon.a (mon.0) 708 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:52:08.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:08 vm00 bash[20726]: audit 2026-03-10T14:52:07.094392+0000 mon.a (mon.0) 708 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:52:08.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:08 vm00 bash[20726]: audit 2026-03-10T14:52:07.098376+0000 mon.a (mon.0) 709 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:52:08.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:08 vm00 bash[20726]: audit 2026-03-10T14:52:07.098376+0000 mon.a (mon.0) 709 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:52:08.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:08 vm00 bash[20726]: audit 2026-03-10T14:52:07.099770+0000 mon.a (mon.0) 710 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", 
allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-10T14:52:08.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:08 vm00 bash[20726]: audit 2026-03-10T14:52:07.099770+0000 mon.a (mon.0) 710 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-10T14:52:08.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:08 vm00 bash[20726]: audit 2026-03-10T14:52:07.101799+0000 mon.a (mon.0) 711 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished 2026-03-10T14:52:08.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:08 vm00 bash[20726]: audit 2026-03-10T14:52:07.101799+0000 mon.a (mon.0) 711 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished 2026-03-10T14:52:08.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:08 vm00 bash[20726]: audit 2026-03-10T14:52:07.106078+0000 mon.a (mon.0) 712 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:52:08.467 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:08 vm00 bash[20726]: audit 2026-03-10T14:52:07.106078+0000 mon.a (mon.0) 712 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:52:08.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:08 vm00 bash[20726]: cephadm 2026-03-10T14:52:07.106547+0000 mgr.y (mgr.14152) 258 : cephadm [INF] Deploying daemon iscsi.iscsi.a on vm03 2026-03-10T14:52:08.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:08 vm00 bash[20726]: cephadm 2026-03-10T14:52:07.106547+0000 mgr.y (mgr.14152) 258 : cephadm [INF] Deploying daemon iscsi.iscsi.a on vm03 2026-03-10T14:52:08.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:08 vm00 bash[20726]: audit 2026-03-10T14:52:08.066273+0000 mon.a (mon.0) 713 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:52:08.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:08 vm00 bash[20726]: audit 2026-03-10T14:52:08.066273+0000 mon.a (mon.0) 713 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:52:08.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:08 vm00 bash[20726]: audit 2026-03-10T14:52:08.076346+0000 mon.a (mon.0) 714 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:52:08.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:08 vm00 bash[20726]: audit 2026-03-10T14:52:08.076346+0000 mon.a (mon.0) 714 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:52:08.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:08 vm00 bash[20726]: audit 2026-03-10T14:52:08.085097+0000 mon.a (mon.0) 715 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:52:08.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:08 vm00 bash[20726]: audit 2026-03-10T14:52:08.085097+0000 mon.a (mon.0) 715 : audit [INF] 
from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:52:08.625 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 14:52:08 vm03 bash[48459]: debug Processing osd blocklist entries for this node 2026-03-10T14:52:09.121 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 14:52:08 vm03 bash[48459]: debug Reading the configuration object to update local LIO configuration 2026-03-10T14:52:09.121 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 14:52:08 vm03 bash[48459]: debug Configuration does not have an entry for this host(vm03.local) - nothing to define to LIO 2026-03-10T14:52:09.121 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 14:52:08 vm03 bash[48459]: * Serving Flask app 'rbd-target-api' (lazy loading) 2026-03-10T14:52:09.122 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 14:52:08 vm03 bash[48459]: * Environment: production 2026-03-10T14:52:09.122 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 14:52:08 vm03 bash[48459]: WARNING: This is a development server. Do not use it in a production deployment. 2026-03-10T14:52:09.122 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 14:52:08 vm03 bash[48459]: Use a production WSGI server instead. 2026-03-10T14:52:09.122 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 14:52:08 vm03 bash[48459]: * Debug mode: off 2026-03-10T14:52:09.122 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 14:52:08 vm03 bash[48459]: debug * Running on all addresses. 2026-03-10T14:52:09.122 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 14:52:08 vm03 bash[48459]: WARNING: This is a development server. Do not use it in a production deployment. 2026-03-10T14:52:09.122 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 14:52:08 vm03 bash[48459]: * Running on all addresses. 2026-03-10T14:52:09.122 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 14:52:08 vm03 bash[48459]: WARNING: This is a development server. Do not use it in a production deployment. 
2026-03-10T14:52:09.122 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 14:52:08 vm03 bash[48459]: debug * Running on http://[::1]:5000/ (Press CTRL+C to quit)
2026-03-10T14:52:09.122 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 14:52:08 vm03 bash[48459]: * Running on http://[::1]:5000/ (Press CTRL+C to quit)
2026-03-10T14:52:09.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:09 vm03 bash[23394]: cephadm 2026-03-10T14:52:08.085981+0000 mgr.y (mgr.14152) 259 : cephadm [INF] Checking pool "datapool" exists for service iscsi.datapool
2026-03-10T14:52:09.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:09 vm03 bash[23394]: audit 2026-03-10T14:52:08.114576+0000 mon.a (mon.0) 716 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:52:09.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:09 vm03 bash[23394]: audit 2026-03-10T14:52:08.125493+0000 mon.a (mon.0) 717 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:52:09.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:09 vm03 bash[23394]: audit 2026-03-10T14:52:08.639589+0000 mon.b (mon.1) 28 : audit [DBG] from='client.? 192.168.123.103:0/2005582416' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
2026-03-10T14:52:09.466 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:09 vm00 bash[28403]: cephadm 2026-03-10T14:52:08.085981+0000 mgr.y (mgr.14152) 259 : cephadm [INF] Checking pool "datapool" exists for service iscsi.datapool
2026-03-10T14:52:09.466 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:09 vm00 bash[28403]: audit 2026-03-10T14:52:08.114576+0000 mon.a (mon.0) 716 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:52:09.466 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:09 vm00 bash[28403]: audit 2026-03-10T14:52:08.125493+0000 mon.a (mon.0) 717 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:52:09.466 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:09 vm00 bash[28403]: audit 2026-03-10T14:52:08.639589+0000 mon.b (mon.1) 28 : audit [DBG] from='client.? 192.168.123.103:0/2005582416' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
2026-03-10T14:52:09.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:09 vm00 bash[20726]: cephadm 2026-03-10T14:52:08.085981+0000 mgr.y (mgr.14152) 259 : cephadm [INF] Checking pool "datapool" exists for service iscsi.datapool
2026-03-10T14:52:09.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:09 vm00 bash[20726]: audit 2026-03-10T14:52:08.114576+0000 mon.a (mon.0) 716 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:52:09.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:09 vm00 bash[20726]: audit 2026-03-10T14:52:08.125493+0000 mon.a (mon.0) 717 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:52:09.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:09 vm00 bash[20726]: audit 2026-03-10T14:52:08.639589+0000 mon.b (mon.1) 28 : audit [DBG] from='client.? 192.168.123.103:0/2005582416' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
2026-03-10T14:52:10.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:10 vm03 bash[23394]: cluster 2026-03-10T14:52:08.739314+0000 mgr.y (mgr.14152) 260 : cluster [DBG] pgmap v243: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 60 KiB/s rd, 4.6 KiB/s wr, 142 op/s
2026-03-10T14:52:10.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:10 vm03 bash[23394]: cluster 2026-03-10T14:52:09.139622+0000 mon.a (mon.0) 718 : cluster [DBG] mgrmap e16: y(active, since 6m), standbys: x
2026-03-10T14:52:10.466 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:10 vm00 bash[28403]: cluster 2026-03-10T14:52:08.739314+0000 mgr.y (mgr.14152) 260 : cluster [DBG] pgmap v243: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 60 KiB/s rd, 4.6 KiB/s wr, 142 op/s
2026-03-10T14:52:10.466 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:10 vm00 bash[28403]: cluster 2026-03-10T14:52:09.139622+0000 mon.a (mon.0) 718 : cluster [DBG] mgrmap e16: y(active, since 6m), standbys: x
2026-03-10T14:52:10.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:10 vm00 bash[20726]: cluster 2026-03-10T14:52:08.739314+0000 mgr.y (mgr.14152) 260 : cluster [DBG] pgmap v243: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 60 KiB/s rd, 4.6 KiB/s wr, 142 op/s
2026-03-10T14:52:10.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:10 vm00 bash[20726]: cluster 2026-03-10T14:52:09.139622+0000 mon.a (mon.0) 718 : cluster [DBG] mgrmap e16: y(active, since 6m), standbys: x
2026-03-10T14:52:11.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:11 vm00 bash[28403]: audit 2026-03-10T14:52:10.636575+0000 mon.a (mon.0) 719 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:52:11.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:11 vm00 bash[28403]: cluster 2026-03-10T14:52:10.739652+0000 mgr.y (mgr.14152) 261 : cluster [DBG] pgmap v244: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 49 KiB/s rd, 3.8 KiB/s wr, 116 op/s
2026-03-10T14:52:11.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:11 vm00 bash[20726]: audit 2026-03-10T14:52:10.636575+0000 mon.a (mon.0) 719 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:52:11.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:11 vm00 bash[20726]: cluster 2026-03-10T14:52:10.739652+0000 mgr.y (mgr.14152) 261 : cluster [DBG] pgmap v244: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 49 KiB/s rd, 3.8 KiB/s wr, 116 op/s
2026-03-10T14:52:12.109 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/mon.b/config
2026-03-10T14:52:12.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:11 vm03 bash[23394]: audit 2026-03-10T14:52:10.636575+0000 mon.a (mon.0) 719 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:52:12.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:11 vm03 bash[23394]: cluster 2026-03-10T14:52:10.739652+0000 mgr.y (mgr.14152) 261 : cluster [DBG] pgmap v244: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 49 KiB/s rd, 3.8 KiB/s wr, 116 op/s
2026-03-10T14:52:12.494 INFO:teuthology.orchestra.run.vm03.stdout:Scheduled prometheus update...
2026-03-10T14:52:12.557 DEBUG:teuthology.orchestra.run.vm03:prometheus.a> sudo journalctl -f -n 0 -u ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@prometheus.a.service
2026-03-10T14:52:12.558 INFO:tasks.cephadm:Adding node-exporter.a on vm00
2026-03-10T14:52:12.558 INFO:tasks.cephadm:Adding node-exporter.b on vm03
2026-03-10T14:52:12.558 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 -- ceph orch apply node-exporter '2;vm00=a;vm03=b'
2026-03-10T14:52:13.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:13 vm03 bash[23394]: audit 2026-03-10T14:52:12.455412+0000 mgr.y (mgr.14152) 262 : audit [DBG] from='client.24415 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "placement": "1;vm03=a", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T14:52:13.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:13 vm03 bash[23394]: cephadm 2026-03-10T14:52:12.456325+0000 mgr.y (mgr.14152) 263 : cephadm [INF] Saving service prometheus spec with placement vm03=a;count:1
2026-03-10T14:52:13.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:13 vm03 bash[23394]: audit 2026-03-10T14:52:12.492941+0000 mon.a (mon.0) 720 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:52:13.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:13 vm03 bash[23394]: audit 2026-03-10T14:52:12.498115+0000 mon.a (mon.0) 721 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:52:13.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:13 vm03 bash[23394]: audit 2026-03-10T14:52:12.512371+0000 mon.a (mon.0) 722 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:52:13.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:13 vm03 bash[23394]: audit 2026-03-10T14:52:12.513317+0000 mon.a (mon.0) 723 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:52:13.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:13 vm03 bash[23394]: audit 2026-03-10T14:52:12.513878+0000 mon.a (mon.0) 724 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T14:52:13.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:13 vm03 bash[23394]: audit 2026-03-10T14:52:12.519538+0000 mon.a (mon.0) 725 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:52:13.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:13 vm03 bash[23394]: cephadm 2026-03-10T14:52:12.682675+0000 mgr.y (mgr.14152) 264 : cephadm [INF] Deploying daemon prometheus.a on vm03
2026-03-10T14:52:13.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:13 vm03 bash[23394]: cluster 2026-03-10T14:52:12.740169+0000 mgr.y (mgr.14152) 265 : cluster [DBG] pgmap v245: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 22 KiB/s rd, 1.9 KiB/s wr, 54 op/s
2026-03-10T14:52:13.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:13 vm03 bash[23394]: cluster 2026-03-10T14:52:13.408400+0000 mon.a (mon.0) 726 : cluster [DBG] osdmap e68: 8 total, 8 up, 8 in
2026-03-10T14:52:13.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:13 vm00 bash[28403]: audit 2026-03-10T14:52:12.455412+0000 mgr.y (mgr.14152) 262 : audit [DBG] from='client.24415 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "placement": "1;vm03=a", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T14:52:13.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:13 vm00 bash[28403]: cephadm 2026-03-10T14:52:12.456325+0000 mgr.y (mgr.14152) 263 : cephadm [INF] Saving service prometheus spec with placement vm03=a;count:1
2026-03-10T14:52:13.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:13 vm00 bash[28403]: audit 2026-03-10T14:52:12.492941+0000 mon.a (mon.0) 720 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:52:13.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:13 vm00 bash[28403]: audit 2026-03-10T14:52:12.498115+0000 mon.a (mon.0) 721 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:52:13.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:13 vm00 bash[28403]: audit 2026-03-10T14:52:12.512371+0000 mon.a (mon.0) 722 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:52:13.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:13 vm00 bash[28403]: audit 2026-03-10T14:52:12.513317+0000 mon.a (mon.0) 723 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:52:13.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:13 vm00 bash[28403]: audit 2026-03-10T14:52:12.513878+0000 mon.a (mon.0) 724 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T14:52:13.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:13 vm00 bash[28403]: audit 2026-03-10T14:52:12.519538+0000 mon.a (mon.0) 725 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y'
2026-03-10T14:52:13.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:13 vm00 bash[28403]: cephadm 2026-03-10T14:52:12.682675+0000 mgr.y (mgr.14152) 264 : cephadm [INF] Deploying daemon prometheus.a on vm03
2026-03-10T14:52:13.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:13 vm00 bash[28403]: cluster 2026-03-10T14:52:12.740169+0000 mgr.y (mgr.14152) 265 : cluster [DBG] pgmap v245: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 22 KiB/s rd, 1.9 KiB/s wr, 54 op/s
2026-03-10T14:52:13.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:13 vm00 bash[28403]: cluster 2026-03-10T14:52:13.408400+0000 mon.a (mon.0) 726 : cluster [DBG] osdmap e68: 8 total, 8 up, 8 in
2026-03-10T14:52:13.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:13 vm00 bash[20726]: audit 2026-03-10T14:52:12.455412+0000 mgr.y (mgr.14152) 262 : audit [DBG]
from='client.24415 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "placement": "1;vm03=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:52:13.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:13 vm00 bash[20726]: audit 2026-03-10T14:52:12.455412+0000 mgr.y (mgr.14152) 262 : audit [DBG] from='client.24415 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "placement": "1;vm03=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:52:13.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:13 vm00 bash[20726]: cephadm 2026-03-10T14:52:12.456325+0000 mgr.y (mgr.14152) 263 : cephadm [INF] Saving service prometheus spec with placement vm03=a;count:1 2026-03-10T14:52:13.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:13 vm00 bash[20726]: cephadm 2026-03-10T14:52:12.456325+0000 mgr.y (mgr.14152) 263 : cephadm [INF] Saving service prometheus spec with placement vm03=a;count:1 2026-03-10T14:52:13.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:13 vm00 bash[20726]: audit 2026-03-10T14:52:12.492941+0000 mon.a (mon.0) 720 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:52:13.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:13 vm00 bash[20726]: audit 2026-03-10T14:52:12.492941+0000 mon.a (mon.0) 720 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:52:13.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:13 vm00 bash[20726]: audit 2026-03-10T14:52:12.498115+0000 mon.a (mon.0) 721 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:52:13.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:13 vm00 bash[20726]: audit 2026-03-10T14:52:12.498115+0000 mon.a (mon.0) 721 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:52:13.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:13 vm00 bash[20726]: audit 
2026-03-10T14:52:12.512371+0000 mon.a (mon.0) 722 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:52:13.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:13 vm00 bash[20726]: audit 2026-03-10T14:52:12.512371+0000 mon.a (mon.0) 722 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:52:13.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:13 vm00 bash[20726]: audit 2026-03-10T14:52:12.513317+0000 mon.a (mon.0) 723 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:52:13.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:13 vm00 bash[20726]: audit 2026-03-10T14:52:12.513317+0000 mon.a (mon.0) 723 : audit [DBG] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:52:13.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:13 vm00 bash[20726]: audit 2026-03-10T14:52:12.513878+0000 mon.a (mon.0) 724 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:52:13.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:13 vm00 bash[20726]: audit 2026-03-10T14:52:12.513878+0000 mon.a (mon.0) 724 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:52:13.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:13 vm00 bash[20726]: audit 2026-03-10T14:52:12.519538+0000 mon.a (mon.0) 725 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:52:13.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:13 vm00 bash[20726]: audit 2026-03-10T14:52:12.519538+0000 mon.a (mon.0) 725 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:52:13.967 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:13 vm00 bash[20726]: cephadm 2026-03-10T14:52:12.682675+0000 mgr.y (mgr.14152) 264 : cephadm [INF] Deploying daemon prometheus.a on vm03 2026-03-10T14:52:13.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:13 vm00 bash[20726]: cephadm 2026-03-10T14:52:12.682675+0000 mgr.y (mgr.14152) 264 : cephadm [INF] Deploying daemon prometheus.a on vm03 2026-03-10T14:52:13.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:13 vm00 bash[20726]: cluster 2026-03-10T14:52:12.740169+0000 mgr.y (mgr.14152) 265 : cluster [DBG] pgmap v245: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 22 KiB/s rd, 1.9 KiB/s wr, 54 op/s 2026-03-10T14:52:13.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:13 vm00 bash[20726]: cluster 2026-03-10T14:52:12.740169+0000 mgr.y (mgr.14152) 265 : cluster [DBG] pgmap v245: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 22 KiB/s rd, 1.9 KiB/s wr, 54 op/s 2026-03-10T14:52:13.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:13 vm00 bash[20726]: cluster 2026-03-10T14:52:13.408400+0000 mon.a (mon.0) 726 : cluster [DBG] osdmap e68: 8 total, 8 up, 8 in 2026-03-10T14:52:13.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:13 vm00 bash[20726]: cluster 2026-03-10T14:52:13.408400+0000 mon.a (mon.0) 726 : cluster [DBG] osdmap e68: 8 total, 8 up, 8 in 2026-03-10T14:52:16.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:15 vm03 bash[23394]: cluster 2026-03-10T14:52:14.740458+0000 mgr.y (mgr.14152) 266 : cluster [DBG] pgmap v247: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 409 B/s rd, 204 B/s wr, 1 op/s 2026-03-10T14:52:16.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:15 vm03 bash[23394]: cluster 2026-03-10T14:52:14.740458+0000 mgr.y (mgr.14152) 266 : cluster [DBG] pgmap v247: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 409 B/s rd, 204 B/s 
wr, 1 op/s 2026-03-10T14:52:16.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:15 vm00 bash[28403]: cluster 2026-03-10T14:52:14.740458+0000 mgr.y (mgr.14152) 266 : cluster [DBG] pgmap v247: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 409 B/s rd, 204 B/s wr, 1 op/s 2026-03-10T14:52:16.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:15 vm00 bash[28403]: cluster 2026-03-10T14:52:14.740458+0000 mgr.y (mgr.14152) 266 : cluster [DBG] pgmap v247: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 409 B/s rd, 204 B/s wr, 1 op/s 2026-03-10T14:52:16.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:15 vm00 bash[20726]: cluster 2026-03-10T14:52:14.740458+0000 mgr.y (mgr.14152) 266 : cluster [DBG] pgmap v247: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 409 B/s rd, 204 B/s wr, 1 op/s 2026-03-10T14:52:16.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:15 vm00 bash[20726]: cluster 2026-03-10T14:52:14.740458+0000 mgr.y (mgr.14152) 266 : cluster [DBG] pgmap v247: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 409 B/s rd, 204 B/s wr, 1 op/s 2026-03-10T14:52:17.259 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/mon.b/config 2026-03-10T14:52:18.302 INFO:teuthology.orchestra.run.vm03.stdout:Scheduled node-exporter update... 
2026-03-10T14:52:18.313 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:17 vm03 bash[23394]: cluster 2026-03-10T14:52:16.740974+0000 mgr.y (mgr.14152) 267 : cluster [DBG] pgmap v248: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 409 B/s rd, 204 B/s wr, 1 op/s 2026-03-10T14:52:18.313 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:17 vm03 bash[23394]: cluster 2026-03-10T14:52:16.740974+0000 mgr.y (mgr.14152) 267 : cluster [DBG] pgmap v248: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 409 B/s rd, 204 B/s wr, 1 op/s 2026-03-10T14:52:18.466 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:17 vm00 bash[28403]: cluster 2026-03-10T14:52:16.740974+0000 mgr.y (mgr.14152) 267 : cluster [DBG] pgmap v248: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 409 B/s rd, 204 B/s wr, 1 op/s 2026-03-10T14:52:18.466 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:17 vm00 bash[28403]: cluster 2026-03-10T14:52:16.740974+0000 mgr.y (mgr.14152) 267 : cluster [DBG] pgmap v248: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 409 B/s rd, 204 B/s wr, 1 op/s 2026-03-10T14:52:18.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:17 vm00 bash[20726]: cluster 2026-03-10T14:52:16.740974+0000 mgr.y (mgr.14152) 267 : cluster [DBG] pgmap v248: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 409 B/s rd, 204 B/s wr, 1 op/s 2026-03-10T14:52:18.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:17 vm00 bash[20726]: cluster 2026-03-10T14:52:16.740974+0000 mgr.y (mgr.14152) 267 : cluster [DBG] pgmap v248: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 409 B/s rd, 204 B/s wr, 1 op/s 2026-03-10T14:52:18.469 DEBUG:teuthology.orchestra.run.vm00:node-exporter.a> sudo journalctl -f -n 0 -u ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@node-exporter.a.service 2026-03-10T14:52:18.470 
DEBUG:teuthology.orchestra.run.vm03:node-exporter.b> sudo journalctl -f -n 0 -u ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@node-exporter.b.service 2026-03-10T14:52:18.472 INFO:tasks.cephadm:Adding alertmanager.a on vm00 2026-03-10T14:52:18.472 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 -- ceph orch apply alertmanager '1;vm00=a' 2026-03-10T14:52:18.625 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 14:52:18 vm03 bash[48459]: debug there is no tcmu-runner data available 2026-03-10T14:52:19.365 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:19 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-10T14:52:19.365 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:19 vm03 bash[23394]: audit 2026-03-10T14:52:18.276498+0000 mgr.y (mgr.14152) 268 : audit [DBG] from='client.24421 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "placement": "2;vm00=a;vm03=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:52:19.365 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:19 vm03 bash[23394]: audit 2026-03-10T14:52:18.276498+0000 mgr.y (mgr.14152) 268 : audit [DBG] from='client.24421 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "placement": "2;vm00=a;vm03=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:52:19.365 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:19 vm03 bash[23394]: cephadm 2026-03-10T14:52:18.277481+0000 mgr.y (mgr.14152) 269 : cephadm [INF] Saving service node-exporter spec with placement vm00=a;vm03=b;count:2 2026-03-10T14:52:19.365 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:19 vm03 bash[23394]: cephadm 2026-03-10T14:52:18.277481+0000 mgr.y (mgr.14152) 269 : cephadm [INF] Saving service node-exporter spec with placement vm00=a;vm03=b;count:2 2026-03-10T14:52:19.365 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:19 vm03 bash[23394]: audit 2026-03-10T14:52:18.303528+0000 mon.a (mon.0) 727 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:52:19.365 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:19 vm03 bash[23394]: audit 2026-03-10T14:52:18.303528+0000 mon.a (mon.0) 727 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:52:19.365 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:19 vm03 bash[23394]: audit 2026-03-10T14:52:18.487615+0000 mgr.y (mgr.14152) 270 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:52:19.365 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 
14:52:19 vm03 bash[23394]: audit 2026-03-10T14:52:18.487615+0000 mgr.y (mgr.14152) 270 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:52:19.365 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:19 vm03 bash[23394]: cluster 2026-03-10T14:52:18.741424+0000 mgr.y (mgr.14152) 271 : cluster [DBG] pgmap v249: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 921 B/s rd, 102 B/s wr, 1 op/s 2026-03-10T14:52:19.365 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:19 vm03 bash[23394]: cluster 2026-03-10T14:52:18.741424+0000 mgr.y (mgr.14152) 271 : cluster [DBG] pgmap v249: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 921 B/s rd, 102 B/s wr, 1 op/s 2026-03-10T14:52:19.365 INFO:journalctl@ceph.osd.4.vm03.stdout:Mar 10 14:52:19 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:52:19.366 INFO:journalctl@ceph.osd.5.vm03.stdout:Mar 10 14:52:19 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:52:19.366 INFO:journalctl@ceph.osd.6.vm03.stdout:Mar 10 14:52:19 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:52:19.366 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 14:52:19 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:52:19.366 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 14:52:19 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:52:19.366 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:52:19 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:52:19.625 INFO:journalctl@ceph.osd.5.vm03.stdout:Mar 10 14:52:19 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:52:19.625 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 14:52:19 vm03 systemd[1]: Started Ceph prometheus.a for 93bd26bc-1c8f-11f1-8404-610ce866bde7. 2026-03-10T14:52:19.625 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 14:52:19 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:52:19.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:19 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:52:19.625 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:52:19 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:52:19.625 INFO:journalctl@ceph.osd.4.vm03.stdout:Mar 10 14:52:19 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:52:19.625 INFO:journalctl@ceph.osd.6.vm03.stdout:Mar 10 14:52:19 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:52:19.625 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 14:52:19 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-10T14:52:19.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:19 vm00 bash[28403]: audit 2026-03-10T14:52:18.276498+0000 mgr.y (mgr.14152) 268 : audit [DBG] from='client.24421 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "placement": "2;vm00=a;vm03=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:52:19.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:19 vm00 bash[28403]: audit 2026-03-10T14:52:18.276498+0000 mgr.y (mgr.14152) 268 : audit [DBG] from='client.24421 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "placement": "2;vm00=a;vm03=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:52:19.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:19 vm00 bash[28403]: cephadm 2026-03-10T14:52:18.277481+0000 mgr.y (mgr.14152) 269 : cephadm [INF] Saving service node-exporter spec with placement vm00=a;vm03=b;count:2 2026-03-10T14:52:19.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:19 vm00 bash[28403]: cephadm 2026-03-10T14:52:18.277481+0000 mgr.y (mgr.14152) 269 : cephadm [INF] Saving service node-exporter spec with placement vm00=a;vm03=b;count:2 2026-03-10T14:52:19.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:19 vm00 bash[28403]: audit 2026-03-10T14:52:18.303528+0000 mon.a (mon.0) 727 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:52:19.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:19 vm00 bash[28403]: audit 2026-03-10T14:52:18.303528+0000 mon.a (mon.0) 727 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:52:19.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:19 vm00 bash[28403]: audit 2026-03-10T14:52:18.487615+0000 mgr.y (mgr.14152) 270 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:52:19.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 
14:52:19 vm00 bash[28403]: audit 2026-03-10T14:52:18.487615+0000 mgr.y (mgr.14152) 270 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:52:19.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:19 vm00 bash[28403]: cluster 2026-03-10T14:52:18.741424+0000 mgr.y (mgr.14152) 271 : cluster [DBG] pgmap v249: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 921 B/s rd, 102 B/s wr, 1 op/s 2026-03-10T14:52:19.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:19 vm00 bash[28403]: cluster 2026-03-10T14:52:18.741424+0000 mgr.y (mgr.14152) 271 : cluster [DBG] pgmap v249: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 921 B/s rd, 102 B/s wr, 1 op/s 2026-03-10T14:52:19.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:19 vm00 bash[20726]: audit 2026-03-10T14:52:18.276498+0000 mgr.y (mgr.14152) 268 : audit [DBG] from='client.24421 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "placement": "2;vm00=a;vm03=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:52:19.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:19 vm00 bash[20726]: audit 2026-03-10T14:52:18.276498+0000 mgr.y (mgr.14152) 268 : audit [DBG] from='client.24421 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "placement": "2;vm00=a;vm03=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:52:19.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:19 vm00 bash[20726]: cephadm 2026-03-10T14:52:18.277481+0000 mgr.y (mgr.14152) 269 : cephadm [INF] Saving service node-exporter spec with placement vm00=a;vm03=b;count:2 2026-03-10T14:52:19.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:19 vm00 bash[20726]: cephadm 2026-03-10T14:52:18.277481+0000 mgr.y (mgr.14152) 269 : cephadm [INF] Saving service node-exporter spec with placement vm00=a;vm03=b;count:2 
2026-03-10T14:52:19.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:19 vm00 bash[20726]: audit 2026-03-10T14:52:18.303528+0000 mon.a (mon.0) 727 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:52:19.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:19 vm00 bash[20726]: audit 2026-03-10T14:52:18.303528+0000 mon.a (mon.0) 727 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:52:19.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:19 vm00 bash[20726]: audit 2026-03-10T14:52:18.487615+0000 mgr.y (mgr.14152) 270 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:52:19.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:19 vm00 bash[20726]: audit 2026-03-10T14:52:18.487615+0000 mgr.y (mgr.14152) 270 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:52:19.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:19 vm00 bash[20726]: cluster 2026-03-10T14:52:18.741424+0000 mgr.y (mgr.14152) 271 : cluster [DBG] pgmap v249: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 921 B/s rd, 102 B/s wr, 1 op/s 2026-03-10T14:52:19.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:19 vm00 bash[20726]: cluster 2026-03-10T14:52:18.741424+0000 mgr.y (mgr.14152) 271 : cluster [DBG] pgmap v249: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 921 B/s rd, 102 B/s wr, 1 op/s 2026-03-10T14:52:20.125 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 14:52:19 vm03 bash[49425]: ts=2026-03-10T14:52:19.742Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.0, branch=HEAD, revision=c05c15512acb675e3f6cd662a6727854e93fc024)" 2026-03-10T14:52:20.125 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 
14:52:19 vm03 bash[49425]: ts=2026-03-10T14:52:19.742Z caller=main.go:622 level=info build_context="(go=go1.22.1, platform=linux/amd64, user=root@b5723e458358, date=20240319-10:54:45, tags=netgo,builtinassets,stringlabels)" 2026-03-10T14:52:20.125 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 14:52:19 vm03 bash[49425]: ts=2026-03-10T14:52:19.742Z caller=main.go:623 level=info host_details="(Linux 5.15.0-1092-kvm #97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026 x86_64 vm03 (none))" 2026-03-10T14:52:20.125 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 14:52:19 vm03 bash[49425]: ts=2026-03-10T14:52:19.742Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)" 2026-03-10T14:52:20.125 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 14:52:19 vm03 bash[49425]: ts=2026-03-10T14:52:19.742Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)" 2026-03-10T14:52:20.125 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 14:52:19 vm03 bash[49425]: ts=2026-03-10T14:52:19.743Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=:9095 2026-03-10T14:52:20.125 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 14:52:19 vm03 bash[49425]: ts=2026-03-10T14:52:19.744Z caller=main.go:1129 level=info msg="Starting TSDB ..." 
2026-03-10T14:52:20.125 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 14:52:19 vm03 bash[49425]: ts=2026-03-10T14:52:19.745Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any" 2026-03-10T14:52:20.125 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 14:52:19 vm03 bash[49425]: ts=2026-03-10T14:52:19.745Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=1.543µs 2026-03-10T14:52:20.125 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 14:52:19 vm03 bash[49425]: ts=2026-03-10T14:52:19.745Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while" 2026-03-10T14:52:20.125 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 14:52:19 vm03 bash[49425]: ts=2026-03-10T14:52:19.746Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9095 2026-03-10T14:52:20.125 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 14:52:19 vm03 bash[49425]: ts=2026-03-10T14:52:19.746Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." 
http2=false address=[::]:9095 2026-03-10T14:52:20.125 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 14:52:19 vm03 bash[49425]: ts=2026-03-10T14:52:19.746Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0 2026-03-10T14:52:20.125 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 14:52:19 vm03 bash[49425]: ts=2026-03-10T14:52:19.746Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=24.255µs wal_replay_duration=722.223µs wbl_replay_duration=130ns total_replay_duration=921.745µs 2026-03-10T14:52:20.125 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 14:52:19 vm03 bash[49425]: ts=2026-03-10T14:52:19.751Z caller=main.go:1150 level=info fs_type=EXT4_SUPER_MAGIC 2026-03-10T14:52:20.125 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 14:52:19 vm03 bash[49425]: ts=2026-03-10T14:52:19.751Z caller=main.go:1153 level=info msg="TSDB started" 2026-03-10T14:52:20.125 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 14:52:19 vm03 bash[49425]: ts=2026-03-10T14:52:19.751Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml 2026-03-10T14:52:20.125 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 14:52:19 vm03 bash[49425]: ts=2026-03-10T14:52:19.765Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=14.11569ms db_storage=662ns remote_storage=1.122µs web_handler=371ns query_engine=431ns scrape=775.631µs scrape_sd=95.058µs notify=472ns notify_sd=581ns rules=12.702425ms tracing=102.692µs 2026-03-10T14:52:20.125 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 14:52:19 vm03 bash[49425]: ts=2026-03-10T14:52:19.765Z caller=main.go:1114 level=info msg="Server is ready to receive web requests." 
2026-03-10T14:52:20.125 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 14:52:19 vm03 bash[49425]: ts=2026-03-10T14:52:19.765Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..." 2026-03-10T14:52:20.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:20 vm00 bash[28403]: audit 2026-03-10T14:52:19.646143+0000 mon.a (mon.0) 728 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:52:20.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:20 vm00 bash[28403]: audit 2026-03-10T14:52:19.646143+0000 mon.a (mon.0) 728 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:52:20.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:20 vm00 bash[28403]: audit 2026-03-10T14:52:19.652068+0000 mon.a (mon.0) 729 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:52:20.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:20 vm00 bash[28403]: audit 2026-03-10T14:52:19.652068+0000 mon.a (mon.0) 729 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:52:20.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:20 vm00 bash[28403]: audit 2026-03-10T14:52:19.656870+0000 mon.a (mon.0) 730 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:52:20.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:20 vm00 bash[28403]: audit 2026-03-10T14:52:19.656870+0000 mon.a (mon.0) 730 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:52:20.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:20 vm00 bash[28403]: audit 2026-03-10T14:52:19.659431+0000 mon.a (mon.0) 731 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-10T14:52:20.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:20 vm00 bash[28403]: audit 
2026-03-10T14:52:19.659431+0000 mon.a (mon.0) 731 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-10T14:52:20.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:20 vm00 bash[20726]: audit 2026-03-10T14:52:19.646143+0000 mon.a (mon.0) 728 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:52:20.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:20 vm00 bash[20726]: audit 2026-03-10T14:52:19.646143+0000 mon.a (mon.0) 728 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:52:20.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:20 vm00 bash[20726]: audit 2026-03-10T14:52:19.652068+0000 mon.a (mon.0) 729 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:52:20.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:20 vm00 bash[20726]: audit 2026-03-10T14:52:19.652068+0000 mon.a (mon.0) 729 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:52:20.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:20 vm00 bash[20726]: audit 2026-03-10T14:52:19.656870+0000 mon.a (mon.0) 730 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:52:20.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:20 vm00 bash[20726]: audit 2026-03-10T14:52:19.656870+0000 mon.a (mon.0) 730 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:52:20.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:20 vm00 bash[20726]: audit 2026-03-10T14:52:19.659431+0000 mon.a (mon.0) 731 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-10T14:52:20.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:20 vm00 bash[20726]: audit 2026-03-10T14:52:19.659431+0000 
mon.a (mon.0) 731 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-10T14:52:20.967 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:52:20 vm00 bash[21005]: ignoring --setuser ceph since I am not root 2026-03-10T14:52:20.967 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:52:20 vm00 bash[21005]: ignoring --setgroup ceph since I am not root 2026-03-10T14:52:20.967 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:52:20 vm00 bash[21005]: debug 2026-03-10T14:52:20.804+0000 7f1387407140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-10T14:52:20.967 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:52:20 vm00 bash[21005]: debug 2026-03-10T14:52:20.844+0000 7f1387407140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-10T14:52:21.009 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:52:20 vm03 bash[24110]: ignoring --setuser ceph since I am not root 2026-03-10T14:52:21.009 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:52:20 vm03 bash[24110]: ignoring --setgroup ceph since I am not root 2026-03-10T14:52:21.009 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:52:20 vm03 bash[24110]: debug 2026-03-10T14:52:20.825+0000 7f8eae62a140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-10T14:52:21.009 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:52:20 vm03 bash[24110]: debug 2026-03-10T14:52:20.869+0000 7f8eae62a140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-10T14:52:21.009 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:20 vm03 bash[23394]: audit 2026-03-10T14:52:19.646143+0000 mon.a (mon.0) 728 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:52:21.009 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:20 vm03 bash[23394]: audit 2026-03-10T14:52:19.646143+0000 mon.a (mon.0) 728 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' 
entity='mgr.y' 2026-03-10T14:52:21.009 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:20 vm03 bash[23394]: audit 2026-03-10T14:52:19.652068+0000 mon.a (mon.0) 729 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:52:21.009 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:20 vm03 bash[23394]: audit 2026-03-10T14:52:19.652068+0000 mon.a (mon.0) 729 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:52:21.009 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:20 vm03 bash[23394]: audit 2026-03-10T14:52:19.656870+0000 mon.a (mon.0) 730 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:52:21.009 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:20 vm03 bash[23394]: audit 2026-03-10T14:52:19.656870+0000 mon.a (mon.0) 730 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:52:21.009 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:20 vm03 bash[23394]: audit 2026-03-10T14:52:19.659431+0000 mon.a (mon.0) 731 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-10T14:52:21.009 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:20 vm03 bash[23394]: audit 2026-03-10T14:52:19.659431+0000 mon.a (mon.0) 731 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-10T14:52:21.282 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:52:20 vm00 bash[21005]: debug 2026-03-10T14:52:20.972+0000 7f1387407140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-10T14:52:21.310 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:52:21 vm03 bash[24110]: debug 2026-03-10T14:52:21.005+0000 7f8eae62a140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-10T14:52:21.625 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 
14:52:21 vm03 bash[24110]: debug 2026-03-10T14:52:21.305+0000 7f8eae62a140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-10T14:52:21.648 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:52:21 vm00 bash[21005]: debug 2026-03-10T14:52:21.280+0000 7f1387407140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-10T14:52:21.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:21 vm00 bash[28403]: audit 2026-03-10T14:52:20.645035+0000 mon.a (mon.0) 732 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:52:21.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:21 vm00 bash[28403]: audit 2026-03-10T14:52:20.645035+0000 mon.a (mon.0) 732 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:52:21.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:21 vm00 bash[28403]: audit 2026-03-10T14:52:20.686555+0000 mon.a (mon.0) 733 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-10T14:52:21.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:21 vm00 bash[28403]: audit 2026-03-10T14:52:20.686555+0000 mon.a (mon.0) 733 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-10T14:52:21.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:21 vm00 bash[28403]: cluster 2026-03-10T14:52:20.704564+0000 mon.a (mon.0) 734 : cluster [DBG] mgrmap e17: y(active, since 6m), standbys: x 2026-03-10T14:52:21.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:21 vm00 bash[28403]: cluster 2026-03-10T14:52:20.704564+0000 mon.a (mon.0) 734 : cluster [DBG] mgrmap e17: y(active, since 6m), standbys: x 2026-03-10T14:52:21.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:21 vm00 bash[20726]: audit 2026-03-10T14:52:20.645035+0000 mon.a (mon.0) 732 : audit [INF] 
from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:52:21.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:21 vm00 bash[20726]: audit 2026-03-10T14:52:20.645035+0000 mon.a (mon.0) 732 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:52:21.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:21 vm00 bash[20726]: audit 2026-03-10T14:52:20.686555+0000 mon.a (mon.0) 733 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-10T14:52:21.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:21 vm00 bash[20726]: audit 2026-03-10T14:52:20.686555+0000 mon.a (mon.0) 733 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-10T14:52:21.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:21 vm00 bash[20726]: cluster 2026-03-10T14:52:20.704564+0000 mon.a (mon.0) 734 : cluster [DBG] mgrmap e17: y(active, since 6m), standbys: x 2026-03-10T14:52:21.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:21 vm00 bash[20726]: cluster 2026-03-10T14:52:20.704564+0000 mon.a (mon.0) 734 : cluster [DBG] mgrmap e17: y(active, since 6m), standbys: x 2026-03-10T14:52:21.967 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:52:21 vm00 bash[21005]: debug 2026-03-10T14:52:21.764+0000 7f1387407140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-10T14:52:21.967 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:52:21 vm00 bash[21005]: debug 2026-03-10T14:52:21.852+0000 7f1387407140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-10T14:52:22.003 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:52:21 vm03 bash[24110]: debug 2026-03-10T14:52:21.777+0000 7f8eae62a140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-10T14:52:22.003 
INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:52:21 vm03 bash[24110]: debug 2026-03-10T14:52:21.861+0000 7f8eae62a140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-10T14:52:22.003 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:21 vm03 bash[23394]: audit 2026-03-10T14:52:20.645035+0000 mon.a (mon.0) 732 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:52:22.003 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:21 vm03 bash[23394]: audit 2026-03-10T14:52:20.645035+0000 mon.a (mon.0) 732 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' 2026-03-10T14:52:22.003 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:21 vm03 bash[23394]: audit 2026-03-10T14:52:20.686555+0000 mon.a (mon.0) 733 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-10T14:52:22.003 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:21 vm03 bash[23394]: audit 2026-03-10T14:52:20.686555+0000 mon.a (mon.0) 733 : audit [INF] from='mgr.14152 192.168.123.100:0/2121680183' entity='mgr.y' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-10T14:52:22.003 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:21 vm03 bash[23394]: cluster 2026-03-10T14:52:20.704564+0000 mon.a (mon.0) 734 : cluster [DBG] mgrmap e17: y(active, since 6m), standbys: x 2026-03-10T14:52:22.003 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:21 vm03 bash[23394]: cluster 2026-03-10T14:52:20.704564+0000 mon.a (mon.0) 734 : cluster [DBG] mgrmap e17: y(active, since 6m), standbys: x 2026-03-10T14:52:22.240 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:52:21 vm00 bash[21005]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. 
This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-10T14:52:22.240 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:52:21 vm00 bash[21005]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 2026-03-10T14:52:22.240 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:52:21 vm00 bash[21005]: from numpy import show_config as show_numpy_config 2026-03-10T14:52:22.240 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:52:21 vm00 bash[21005]: debug 2026-03-10T14:52:21.988+0000 7f1387407140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-10T14:52:22.240 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:52:22 vm00 bash[21005]: debug 2026-03-10T14:52:22.148+0000 7f1387407140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-10T14:52:22.240 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:52:22 vm00 bash[21005]: debug 2026-03-10T14:52:22.192+0000 7f1387407140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-10T14:52:22.266 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:52:22 vm03 bash[24110]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-10T14:52:22.266 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:52:22 vm03 bash[24110]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
2026-03-10T14:52:22.267 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:52:22 vm03 bash[24110]: from numpy import show_config as show_numpy_config 2026-03-10T14:52:22.267 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:52:22 vm03 bash[24110]: debug 2026-03-10T14:52:22.005+0000 7f8eae62a140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-10T14:52:22.267 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:52:22 vm03 bash[24110]: debug 2026-03-10T14:52:22.173+0000 7f8eae62a140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-10T14:52:22.267 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:52:22 vm03 bash[24110]: debug 2026-03-10T14:52:22.221+0000 7f8eae62a140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-10T14:52:22.625 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:52:22 vm03 bash[24110]: debug 2026-03-10T14:52:22.261+0000 7f8eae62a140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-10T14:52:22.625 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:52:22 vm03 bash[24110]: debug 2026-03-10T14:52:22.317+0000 7f8eae62a140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-10T14:52:22.625 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:52:22 vm03 bash[24110]: debug 2026-03-10T14:52:22.373+0000 7f8eae62a140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-10T14:52:22.716 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:52:22 vm00 bash[21005]: debug 2026-03-10T14:52:22.236+0000 7f1387407140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-10T14:52:22.716 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:52:22 vm00 bash[21005]: debug 2026-03-10T14:52:22.288+0000 7f1387407140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-10T14:52:22.716 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:52:22 vm00 bash[21005]: debug 2026-03-10T14:52:22.348+0000 7f1387407140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 
2026-03-10T14:52:23.125 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:52:22 vm03 bash[24110]: debug 2026-03-10T14:52:22.869+0000 7f8eae62a140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-10T14:52:23.125 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:52:22 vm03 bash[24110]: debug 2026-03-10T14:52:22.913+0000 7f8eae62a140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-10T14:52:23.125 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:52:22 vm03 bash[24110]: debug 2026-03-10T14:52:22.965+0000 7f8eae62a140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-10T14:52:23.136 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:52:22 vm00 bash[21005]: debug 2026-03-10T14:52:22.844+0000 7f1387407140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-10T14:52:23.136 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:52:22 vm00 bash[21005]: debug 2026-03-10T14:52:22.884+0000 7f1387407140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-10T14:52:23.136 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:52:22 vm00 bash[21005]: debug 2026-03-10T14:52:22.928+0000 7f1387407140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-10T14:52:23.136 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:52:23 vm00 bash[21005]: debug 2026-03-10T14:52:23.088+0000 7f1387407140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-10T14:52:23.466 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:52:23 vm00 bash[21005]: debug 2026-03-10T14:52:23.132+0000 7f1387407140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-10T14:52:23.466 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:52:23 vm00 bash[21005]: debug 2026-03-10T14:52:23.180+0000 7f1387407140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-10T14:52:23.466 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:52:23 vm00 bash[21005]: debug 2026-03-10T14:52:23.304+0000 7f1387407140 -1 mgr[py] Module 
orchestrator has missing NOTIFY_TYPES member 2026-03-10T14:52:23.519 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:52:23 vm03 bash[24110]: debug 2026-03-10T14:52:23.121+0000 7f8eae62a140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-10T14:52:23.519 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:52:23 vm03 bash[24110]: debug 2026-03-10T14:52:23.165+0000 7f8eae62a140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-10T14:52:23.519 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:52:23 vm03 bash[24110]: debug 2026-03-10T14:52:23.209+0000 7f8eae62a140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-10T14:52:23.520 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:52:23 vm03 bash[24110]: debug 2026-03-10T14:52:23.337+0000 7f8eae62a140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-10T14:52:23.758 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:52:23 vm00 bash[21005]: debug 2026-03-10T14:52:23.484+0000 7f1387407140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-10T14:52:23.758 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:52:23 vm00 bash[21005]: debug 2026-03-10T14:52:23.672+0000 7f1387407140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-10T14:52:23.758 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:52:23 vm00 bash[21005]: debug 2026-03-10T14:52:23.712+0000 7f1387407140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-10T14:52:23.790 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:52:23 vm03 bash[24110]: debug 2026-03-10T14:52:23.517+0000 7f8eae62a140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-10T14:52:23.796 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:52:23 vm03 bash[24110]: debug 2026-03-10T14:52:23.701+0000 7f8eae62a140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-10T14:52:23.796 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:52:23 vm03 bash[24110]: debug 
2026-03-10T14:52:23.737+0000 7f8eae62a140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-10T14:52:24.125 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:52:23 vm03 bash[24110]: debug 2026-03-10T14:52:23.785+0000 7f8eae62a140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-10T14:52:24.125 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:52:23 vm03 bash[24110]: debug 2026-03-10T14:52:23.957+0000 7f8eae62a140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-10T14:52:24.170 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/mon.b/config 2026-03-10T14:52:24.188 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:52:23 vm00 bash[21005]: debug 2026-03-10T14:52:23.756+0000 7f1387407140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-10T14:52:24.188 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:52:23 vm00 bash[21005]: debug 2026-03-10T14:52:23.916+0000 7f1387407140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-10T14:52:24.466 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:24 vm00 bash[28403]: cluster 2026-03-10T14:52:24.193987+0000 mon.a (mon.0) 735 : cluster [INF] Active manager daemon y restarted 2026-03-10T14:52:24.466 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:24 vm00 bash[28403]: cluster 2026-03-10T14:52:24.193987+0000 mon.a (mon.0) 735 : cluster [INF] Active manager daemon y restarted 2026-03-10T14:52:24.466 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:24 vm00 bash[28403]: cluster 2026-03-10T14:52:24.194232+0000 mon.a (mon.0) 736 : cluster [INF] Activating manager daemon y 2026-03-10T14:52:24.466 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:24 vm00 bash[28403]: cluster 2026-03-10T14:52:24.194232+0000 mon.a (mon.0) 736 : cluster [INF] Activating manager daemon y 2026-03-10T14:52:24.466 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:24 vm00 bash[28403]: cluster 
2026-03-10T14:52:24.217257+0000 mon.a (mon.0) 737 : cluster [DBG] osdmap e69: 8 total, 8 up, 8 in 2026-03-10T14:52:24.466 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:24 vm00 bash[28403]: cluster 2026-03-10T14:52:24.217257+0000 mon.a (mon.0) 737 : cluster [DBG] osdmap e69: 8 total, 8 up, 8 in 2026-03-10T14:52:24.466 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:24 vm00 bash[28403]: cluster 2026-03-10T14:52:24.218206+0000 mon.a (mon.0) 738 : cluster [DBG] mgrmap e18: y(active, starting, since 0.0240629s), standbys: x 2026-03-10T14:52:24.466 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:24 vm00 bash[28403]: cluster 2026-03-10T14:52:24.218206+0000 mon.a (mon.0) 738 : cluster [DBG] mgrmap e18: y(active, starting, since 0.0240629s), standbys: x 2026-03-10T14:52:24.466 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:24 vm00 bash[28403]: audit 2026-03-10T14:52:24.227841+0000 mon.a (mon.0) 739 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T14:52:24.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:24 vm00 bash[28403]: audit 2026-03-10T14:52:24.227841+0000 mon.a (mon.0) 739 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T14:52:24.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:24 vm00 bash[28403]: audit 2026-03-10T14:52:24.227942+0000 mon.a (mon.0) 740 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T14:52:24.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:24 vm00 bash[28403]: audit 2026-03-10T14:52:24.227942+0000 mon.a (mon.0) 740 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T14:52:24.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:24 vm00 bash[28403]: audit 
2026-03-10T14:52:24.228052+0000 mon.a (mon.0) 741 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T14:52:24.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:24 vm00 bash[28403]: audit 2026-03-10T14:52:24.228052+0000 mon.a (mon.0) 741 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T14:52:24.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:24 vm00 bash[28403]: audit 2026-03-10T14:52:24.229221+0000 mon.a (mon.0) 742 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-10T14:52:24.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:24 vm00 bash[28403]: audit 2026-03-10T14:52:24.229221+0000 mon.a (mon.0) 742 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-10T14:52:24.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:24 vm00 bash[28403]: audit 2026-03-10T14:52:24.229336+0000 mon.a (mon.0) 743 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-10T14:52:24.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:24 vm00 bash[28403]: audit 2026-03-10T14:52:24.229336+0000 mon.a (mon.0) 743 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-10T14:52:24.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:24 vm00 bash[28403]: audit 2026-03-10T14:52:24.229456+0000 mon.a (mon.0) 744 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T14:52:24.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:24 vm00 
bash[28403]: audit 2026-03-10T14:52:24.229456+0000 mon.a (mon.0) 744 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T14:52:24.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:24 vm00 bash[28403]: audit 2026-03-10T14:52:24.229588+0000 mon.a (mon.0) 745 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T14:52:24.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:24 vm00 bash[28403]: audit 2026-03-10T14:52:24.229588+0000 mon.a (mon.0) 745 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T14:52:24.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:24 vm00 bash[28403]: audit 2026-03-10T14:52:24.229706+0000 mon.a (mon.0) 746 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T14:52:24.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:24 vm00 bash[28403]: audit 2026-03-10T14:52:24.229706+0000 mon.a (mon.0) 746 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T14:52:24.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:24 vm00 bash[28403]: audit 2026-03-10T14:52:24.230066+0000 mon.a (mon.0) 747 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T14:52:24.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:24 vm00 bash[28403]: audit 2026-03-10T14:52:24.230066+0000 mon.a (mon.0) 747 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T14:52:24.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:24 vm00 bash[28403]: audit 
2026-03-10T14:52:24.230339+0000 mon.a (mon.0) 748 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T14:52:24.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:24 vm00 bash[28403]: audit 2026-03-10T14:52:24.230339+0000 mon.a (mon.0) 748 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T14:52:24.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:24 vm00 bash[28403]: audit 2026-03-10T14:52:24.230716+0000 mon.a (mon.0) 749 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T14:52:24.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:24 vm00 bash[28403]: audit 2026-03-10T14:52:24.230716+0000 mon.a (mon.0) 749 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T14:52:24.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:24 vm00 bash[28403]: audit 2026-03-10T14:52:24.231088+0000 mon.a (mon.0) 750 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T14:52:24.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:24 vm00 bash[28403]: audit 2026-03-10T14:52:24.231088+0000 mon.a (mon.0) 750 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T14:52:24.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:24 vm00 bash[28403]: audit 2026-03-10T14:52:24.231389+0000 mon.a (mon.0) 751 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T14:52:24.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:24 vm00 bash[28403]: audit 2026-03-10T14:52:24.231389+0000 mon.a (mon.0) 
751 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T14:52:24.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:24 vm00 bash[28403]: audit 2026-03-10T14:52:24.231675+0000 mon.a (mon.0) 752 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T14:52:24.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:24 vm00 bash[28403]: audit 2026-03-10T14:52:24.231675+0000 mon.a (mon.0) 752 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T14:52:24.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:24 vm00 bash[28403]: audit 2026-03-10T14:52:24.231900+0000 mon.a (mon.0) 753 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T14:52:24.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:24 vm00 bash[28403]: audit 2026-03-10T14:52:24.231900+0000 mon.a (mon.0) 753 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T14:52:24.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:24 vm00 bash[28403]: audit 2026-03-10T14:52:24.232428+0000 mon.a (mon.0) 754 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T14:52:24.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:24 vm00 bash[28403]: audit 2026-03-10T14:52:24.232428+0000 mon.a (mon.0) 754 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T14:52:24.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:24 vm00 bash[28403]: cluster 2026-03-10T14:52:24.240753+0000 mon.a (mon.0) 755 : cluster [INF] Manager daemon y is now available 2026-03-10T14:52:24.467 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:24 vm00 bash[28403]: cluster 2026-03-10T14:52:24.240753+0000 mon.a (mon.0) 755 : cluster [INF] Manager daemon y is now available 2026-03-10T14:52:24.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:24 vm00 bash[20726]: cluster 2026-03-10T14:52:24.193987+0000 mon.a (mon.0) 735 : cluster [INF] Active manager daemon y restarted 2026-03-10T14:52:24.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:24 vm00 bash[20726]: cluster 2026-03-10T14:52:24.193987+0000 mon.a (mon.0) 735 : cluster [INF] Active manager daemon y restarted 2026-03-10T14:52:24.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:24 vm00 bash[20726]: cluster 2026-03-10T14:52:24.194232+0000 mon.a (mon.0) 736 : cluster [INF] Activating manager daemon y 2026-03-10T14:52:24.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:24 vm00 bash[20726]: cluster 2026-03-10T14:52:24.194232+0000 mon.a (mon.0) 736 : cluster [INF] Activating manager daemon y 2026-03-10T14:52:24.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:24 vm00 bash[20726]: cluster 2026-03-10T14:52:24.217257+0000 mon.a (mon.0) 737 : cluster [DBG] osdmap e69: 8 total, 8 up, 8 in 2026-03-10T14:52:24.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:24 vm00 bash[20726]: cluster 2026-03-10T14:52:24.217257+0000 mon.a (mon.0) 737 : cluster [DBG] osdmap e69: 8 total, 8 up, 8 in 2026-03-10T14:52:24.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:24 vm00 bash[20726]: cluster 2026-03-10T14:52:24.218206+0000 mon.a (mon.0) 738 : cluster [DBG] mgrmap e18: y(active, starting, since 0.0240629s), standbys: x 2026-03-10T14:52:24.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:24 vm00 bash[20726]: cluster 2026-03-10T14:52:24.218206+0000 mon.a (mon.0) 738 : cluster [DBG] mgrmap e18: y(active, starting, since 0.0240629s), standbys: x 2026-03-10T14:52:24.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:24 vm00 bash[20726]: audit 2026-03-10T14:52:24.227841+0000 
mon.a (mon.0) 739 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T14:52:24.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:24 vm00 bash[20726]: audit 2026-03-10T14:52:24.227841+0000 mon.a (mon.0) 739 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T14:52:24.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:24 vm00 bash[20726]: audit 2026-03-10T14:52:24.227942+0000 mon.a (mon.0) 740 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T14:52:24.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:24 vm00 bash[20726]: audit 2026-03-10T14:52:24.227942+0000 mon.a (mon.0) 740 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T14:52:24.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:24 vm00 bash[20726]: audit 2026-03-10T14:52:24.228052+0000 mon.a (mon.0) 741 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T14:52:24.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:24 vm00 bash[20726]: audit 2026-03-10T14:52:24.228052+0000 mon.a (mon.0) 741 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T14:52:24.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:24 vm00 bash[20726]: audit 2026-03-10T14:52:24.229221+0000 mon.a (mon.0) 742 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-10T14:52:24.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:24 vm00 bash[20726]: audit 2026-03-10T14:52:24.229221+0000 mon.a (mon.0) 742 : 
audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-10T14:52:24.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:24 vm00 bash[20726]: audit 2026-03-10T14:52:24.229336+0000 mon.a (mon.0) 743 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-10T14:52:24.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:24 vm00 bash[20726]: audit 2026-03-10T14:52:24.229336+0000 mon.a (mon.0) 743 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-10T14:52:24.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:24 vm00 bash[20726]: audit 2026-03-10T14:52:24.229456+0000 mon.a (mon.0) 744 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T14:52:24.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:24 vm00 bash[20726]: audit 2026-03-10T14:52:24.229456+0000 mon.a (mon.0) 744 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T14:52:24.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:24 vm00 bash[20726]: audit 2026-03-10T14:52:24.229588+0000 mon.a (mon.0) 745 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T14:52:24.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:24 vm00 bash[20726]: audit 2026-03-10T14:52:24.229588+0000 mon.a (mon.0) 745 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T14:52:24.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:24 vm00 bash[20726]: audit 2026-03-10T14:52:24.229706+0000 mon.a (mon.0) 746 : 
audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T14:52:24.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:24 vm00 bash[20726]: audit 2026-03-10T14:52:24.229706+0000 mon.a (mon.0) 746 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T14:52:24.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:24 vm00 bash[20726]: audit 2026-03-10T14:52:24.230066+0000 mon.a (mon.0) 747 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T14:52:24.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:24 vm00 bash[20726]: audit 2026-03-10T14:52:24.230066+0000 mon.a (mon.0) 747 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T14:52:24.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:24 vm00 bash[20726]: audit 2026-03-10T14:52:24.230339+0000 mon.a (mon.0) 748 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T14:52:24.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:24 vm00 bash[20726]: audit 2026-03-10T14:52:24.230339+0000 mon.a (mon.0) 748 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T14:52:24.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:24 vm00 bash[20726]: audit 2026-03-10T14:52:24.230716+0000 mon.a (mon.0) 749 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T14:52:24.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:24 vm00 bash[20726]: audit 2026-03-10T14:52:24.230716+0000 mon.a (mon.0) 749 : audit [DBG] from='mgr.24425 
192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T14:52:24.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:24 vm00 bash[20726]: audit 2026-03-10T14:52:24.231088+0000 mon.a (mon.0) 750 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T14:52:24.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:24 vm00 bash[20726]: audit 2026-03-10T14:52:24.231088+0000 mon.a (mon.0) 750 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T14:52:24.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:24 vm00 bash[20726]: audit 2026-03-10T14:52:24.231389+0000 mon.a (mon.0) 751 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T14:52:24.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:24 vm00 bash[20726]: audit 2026-03-10T14:52:24.231389+0000 mon.a (mon.0) 751 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T14:52:24.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:24 vm00 bash[20726]: audit 2026-03-10T14:52:24.231675+0000 mon.a (mon.0) 752 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T14:52:24.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:24 vm00 bash[20726]: audit 2026-03-10T14:52:24.231675+0000 mon.a (mon.0) 752 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T14:52:24.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:24 vm00 bash[20726]: audit 2026-03-10T14:52:24.231900+0000 mon.a (mon.0) 753 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd 
metadata"}]: dispatch 2026-03-10T14:52:24.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:24 vm00 bash[20726]: audit 2026-03-10T14:52:24.231900+0000 mon.a (mon.0) 753 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T14:52:24.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:24 vm00 bash[20726]: audit 2026-03-10T14:52:24.232428+0000 mon.a (mon.0) 754 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T14:52:24.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:24 vm00 bash[20726]: audit 2026-03-10T14:52:24.232428+0000 mon.a (mon.0) 754 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T14:52:24.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:24 vm00 bash[20726]: cluster 2026-03-10T14:52:24.240753+0000 mon.a (mon.0) 755 : cluster [INF] Manager daemon y is now available 2026-03-10T14:52:24.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:24 vm00 bash[20726]: cluster 2026-03-10T14:52:24.240753+0000 mon.a (mon.0) 755 : cluster [INF] Manager daemon y is now available 2026-03-10T14:52:24.468 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:52:24 vm00 bash[21005]: debug 2026-03-10T14:52:24.184+0000 7f1387407140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-10T14:52:24.468 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:52:24 vm00 bash[21005]: [10/Mar/2026:14:52:24] ENGINE Bus STARTING 2026-03-10T14:52:24.468 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:52:24 vm00 bash[21005]: CherryPy Checker: 2026-03-10T14:52:24.468 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:52:24 vm00 bash[21005]: The Application mounted at '' has an empty config. 
2026-03-10T14:52:24.468 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:52:24 vm00 bash[21005]: [10/Mar/2026:14:52:24] ENGINE Serving on http://:::9283 2026-03-10T14:52:24.468 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:52:24 vm00 bash[21005]: [10/Mar/2026:14:52:24] ENGINE Bus STARTED 2026-03-10T14:52:24.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:24 vm03 bash[23394]: cluster 2026-03-10T14:52:24.193987+0000 mon.a (mon.0) 735 : cluster [INF] Active manager daemon y restarted 2026-03-10T14:52:24.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:24 vm03 bash[23394]: cluster 2026-03-10T14:52:24.193987+0000 mon.a (mon.0) 735 : cluster [INF] Active manager daemon y restarted 2026-03-10T14:52:24.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:24 vm03 bash[23394]: cluster 2026-03-10T14:52:24.194232+0000 mon.a (mon.0) 736 : cluster [INF] Activating manager daemon y 2026-03-10T14:52:24.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:24 vm03 bash[23394]: cluster 2026-03-10T14:52:24.194232+0000 mon.a (mon.0) 736 : cluster [INF] Activating manager daemon y 2026-03-10T14:52:24.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:24 vm03 bash[23394]: cluster 2026-03-10T14:52:24.217257+0000 mon.a (mon.0) 737 : cluster [DBG] osdmap e69: 8 total, 8 up, 8 in 2026-03-10T14:52:24.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:24 vm03 bash[23394]: cluster 2026-03-10T14:52:24.217257+0000 mon.a (mon.0) 737 : cluster [DBG] osdmap e69: 8 total, 8 up, 8 in 2026-03-10T14:52:24.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:24 vm03 bash[23394]: cluster 2026-03-10T14:52:24.218206+0000 mon.a (mon.0) 738 : cluster [DBG] mgrmap e18: y(active, starting, since 0.0240629s), standbys: x 2026-03-10T14:52:24.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:24 vm03 bash[23394]: cluster 2026-03-10T14:52:24.218206+0000 mon.a (mon.0) 738 : cluster [DBG] mgrmap e18: y(active, starting, since 0.0240629s), standbys: x 2026-03-10T14:52:24.626 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:24 vm03 bash[23394]: audit 2026-03-10T14:52:24.227841+0000 mon.a (mon.0) 739 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T14:52:24.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:24 vm03 bash[23394]: audit 2026-03-10T14:52:24.227841+0000 mon.a (mon.0) 739 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T14:52:24.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:24 vm03 bash[23394]: audit 2026-03-10T14:52:24.227942+0000 mon.a (mon.0) 740 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T14:52:24.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:24 vm03 bash[23394]: audit 2026-03-10T14:52:24.227942+0000 mon.a (mon.0) 740 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T14:52:24.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:24 vm03 bash[23394]: audit 2026-03-10T14:52:24.228052+0000 mon.a (mon.0) 741 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T14:52:24.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:24 vm03 bash[23394]: audit 2026-03-10T14:52:24.228052+0000 mon.a (mon.0) 741 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T14:52:24.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:24 vm03 bash[23394]: audit 2026-03-10T14:52:24.229221+0000 mon.a (mon.0) 742 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-10T14:52:24.626 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:24 vm03 bash[23394]: audit 2026-03-10T14:52:24.229221+0000 mon.a (mon.0) 742 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-10T14:52:24.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:24 vm03 bash[23394]: audit 2026-03-10T14:52:24.229336+0000 mon.a (mon.0) 743 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-10T14:52:24.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:24 vm03 bash[23394]: audit 2026-03-10T14:52:24.229336+0000 mon.a (mon.0) 743 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-10T14:52:24.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:24 vm03 bash[23394]: audit 2026-03-10T14:52:24.229456+0000 mon.a (mon.0) 744 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T14:52:24.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:24 vm03 bash[23394]: audit 2026-03-10T14:52:24.229456+0000 mon.a (mon.0) 744 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T14:52:24.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:24 vm03 bash[23394]: audit 2026-03-10T14:52:24.229588+0000 mon.a (mon.0) 745 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T14:52:24.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:24 vm03 bash[23394]: audit 2026-03-10T14:52:24.229588+0000 mon.a (mon.0) 745 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T14:52:24.626 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:24 vm03 bash[23394]: audit 2026-03-10T14:52:24.229706+0000 mon.a (mon.0) 746 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T14:52:24.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:24 vm03 bash[23394]: audit 2026-03-10T14:52:24.229706+0000 mon.a (mon.0) 746 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T14:52:24.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:24 vm03 bash[23394]: audit 2026-03-10T14:52:24.230066+0000 mon.a (mon.0) 747 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T14:52:24.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:24 vm03 bash[23394]: audit 2026-03-10T14:52:24.230066+0000 mon.a (mon.0) 747 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T14:52:24.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:24 vm03 bash[23394]: audit 2026-03-10T14:52:24.230339+0000 mon.a (mon.0) 748 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T14:52:24.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:24 vm03 bash[23394]: audit 2026-03-10T14:52:24.230339+0000 mon.a (mon.0) 748 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T14:52:24.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:24 vm03 bash[23394]: audit 2026-03-10T14:52:24.230716+0000 mon.a (mon.0) 749 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T14:52:24.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 
14:52:24 vm03 bash[23394]: audit 2026-03-10T14:52:24.230716+0000 mon.a (mon.0) 749 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T14:52:24.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:24 vm03 bash[23394]: audit 2026-03-10T14:52:24.231088+0000 mon.a (mon.0) 750 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T14:52:24.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:24 vm03 bash[23394]: audit 2026-03-10T14:52:24.231088+0000 mon.a (mon.0) 750 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T14:52:24.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:24 vm03 bash[23394]: audit 2026-03-10T14:52:24.231389+0000 mon.a (mon.0) 751 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T14:52:24.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:24 vm03 bash[23394]: audit 2026-03-10T14:52:24.231389+0000 mon.a (mon.0) 751 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T14:52:24.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:24 vm03 bash[23394]: audit 2026-03-10T14:52:24.231675+0000 mon.a (mon.0) 752 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T14:52:24.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:24 vm03 bash[23394]: audit 2026-03-10T14:52:24.231675+0000 mon.a (mon.0) 752 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T14:52:24.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:24 vm03 bash[23394]: audit 2026-03-10T14:52:24.231900+0000 
mon.a (mon.0) 753 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T14:52:24.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:24 vm03 bash[23394]: audit 2026-03-10T14:52:24.231900+0000 mon.a (mon.0) 753 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T14:52:24.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:24 vm03 bash[23394]: audit 2026-03-10T14:52:24.232428+0000 mon.a (mon.0) 754 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T14:52:24.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:24 vm03 bash[23394]: audit 2026-03-10T14:52:24.232428+0000 mon.a (mon.0) 754 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T14:52:24.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:24 vm03 bash[23394]: cluster 2026-03-10T14:52:24.240753+0000 mon.a (mon.0) 755 : cluster [INF] Manager daemon y is now available 2026-03-10T14:52:24.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:24 vm03 bash[23394]: cluster 2026-03-10T14:52:24.240753+0000 mon.a (mon.0) 755 : cluster [INF] Manager daemon y is now available 2026-03-10T14:52:24.627 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:52:24 vm03 bash[24110]: debug 2026-03-10T14:52:24.265+0000 7f8eae62a140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-10T14:52:24.627 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:52:24 vm03 bash[24110]: [10/Mar/2026:14:52:24] ENGINE Bus STARTING 2026-03-10T14:52:24.627 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:52:24 vm03 bash[24110]: CherryPy Checker: 2026-03-10T14:52:24.627 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:52:24 vm03 bash[24110]: The Application mounted at '' has an empty config. 
2026-03-10T14:52:24.627 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:52:24 vm03 bash[24110]: [10/Mar/2026:14:52:24] ENGINE Serving on http://:::9283 2026-03-10T14:52:24.627 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:52:24 vm03 bash[24110]: [10/Mar/2026:14:52:24] ENGINE Bus STARTED 2026-03-10T14:52:25.253 INFO:teuthology.orchestra.run.vm03.stdout:Scheduled alertmanager update... 2026-03-10T14:52:25.332 DEBUG:teuthology.orchestra.run.vm00:alertmanager.a> sudo journalctl -f -n 0 -u ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@alertmanager.a.service 2026-03-10T14:52:25.333 INFO:tasks.cephadm:Adding grafana.a on vm03 2026-03-10T14:52:25.333 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 -- ceph orch apply grafana '1;vm03=a' 2026-03-10T14:52:25.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:25 vm03 bash[23394]: audit 2026-03-10T14:52:24.271741+0000 mon.a (mon.0) 756 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:25.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:25 vm03 bash[23394]: audit 2026-03-10T14:52:24.271741+0000 mon.a (mon.0) 756 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:25.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:25 vm03 bash[23394]: cluster 2026-03-10T14:52:24.274277+0000 mon.a (mon.0) 757 : cluster [DBG] Standby manager daemon x restarted 2026-03-10T14:52:25.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:25 vm03 bash[23394]: cluster 2026-03-10T14:52:24.274277+0000 mon.a (mon.0) 757 : cluster [DBG] Standby manager daemon x restarted 2026-03-10T14:52:25.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:25 vm03 bash[23394]: cluster 2026-03-10T14:52:24.274441+0000 mon.a (mon.0) 758 : cluster [DBG] Standby 
manager daemon x started 2026-03-10T14:52:25.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:25 vm03 bash[23394]: cluster 2026-03-10T14:52:24.274441+0000 mon.a (mon.0) 758 : cluster [DBG] Standby manager daemon x started 2026-03-10T14:52:25.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:25 vm03 bash[23394]: audit 2026-03-10T14:52:24.279394+0000 mon.a (mon.0) 759 : audit [DBG] from='mgr.? 192.168.123.103:0/1359588603' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-10T14:52:25.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:25 vm03 bash[23394]: audit 2026-03-10T14:52:24.279394+0000 mon.a (mon.0) 759 : audit [DBG] from='mgr.? 192.168.123.103:0/1359588603' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-10T14:52:25.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:25 vm03 bash[23394]: audit 2026-03-10T14:52:24.281387+0000 mon.a (mon.0) 760 : audit [DBG] from='mgr.? 192.168.123.103:0/1359588603' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T14:52:25.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:25 vm03 bash[23394]: audit 2026-03-10T14:52:24.281387+0000 mon.a (mon.0) 760 : audit [DBG] from='mgr.? 192.168.123.103:0/1359588603' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T14:52:25.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:25 vm03 bash[23394]: audit 2026-03-10T14:52:24.282025+0000 mon.a (mon.0) 761 : audit [DBG] from='mgr.? 192.168.123.103:0/1359588603' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-10T14:52:25.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:25 vm03 bash[23394]: audit 2026-03-10T14:52:24.282025+0000 mon.a (mon.0) 761 : audit [DBG] from='mgr.? 
192.168.123.103:0/1359588603' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-10T14:52:25.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:25 vm03 bash[23394]: audit 2026-03-10T14:52:24.282280+0000 mon.a (mon.0) 762 : audit [DBG] from='mgr.? 192.168.123.103:0/1359588603' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T14:52:25.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:25 vm03 bash[23394]: audit 2026-03-10T14:52:24.282280+0000 mon.a (mon.0) 762 : audit [DBG] from='mgr.? 192.168.123.103:0/1359588603' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T14:52:25.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:25 vm03 bash[23394]: audit 2026-03-10T14:52:24.287891+0000 mon.a (mon.0) 763 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:52:25.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:25 vm03 bash[23394]: audit 2026-03-10T14:52:24.287891+0000 mon.a (mon.0) 763 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:52:25.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:25 vm03 bash[23394]: audit 2026-03-10T14:52:24.288008+0000 mon.a (mon.0) 764 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:52:25.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:25 vm03 bash[23394]: audit 2026-03-10T14:52:24.288008+0000 mon.a (mon.0) 764 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:52:25.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:25 vm03 bash[23394]: audit 
2026-03-10T14:52:24.309719+0000 mon.a (mon.0) 765 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T14:52:25.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:25 vm03 bash[23394]: audit 2026-03-10T14:52:24.309719+0000 mon.a (mon.0) 765 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T14:52:25.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:25 vm03 bash[23394]: audit 2026-03-10T14:52:24.348436+0000 mon.a (mon.0) 766 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T14:52:25.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:25 vm03 bash[23394]: audit 2026-03-10T14:52:24.348436+0000 mon.a (mon.0) 766 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T14:52:25.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:25 vm03 bash[23394]: cluster 2026-03-10T14:52:25.227381+0000 mon.a (mon.0) 767 : cluster [DBG] mgrmap e19: y(active, since 1.03324s), standbys: x 2026-03-10T14:52:25.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:25 vm03 bash[23394]: cluster 2026-03-10T14:52:25.227381+0000 mon.a (mon.0) 767 : cluster [DBG] mgrmap e19: y(active, since 1.03324s), standbys: x 2026-03-10T14:52:25.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:25 vm03 bash[23394]: audit 2026-03-10T14:52:25.254158+0000 mon.a (mon.0) 768 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:25.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:25 vm03 bash[23394]: audit 
2026-03-10T14:52:25.254158+0000 mon.a (mon.0) 768 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:25.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:25 vm00 bash[28403]: audit 2026-03-10T14:52:24.271741+0000 mon.a (mon.0) 756 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:25.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:25 vm00 bash[28403]: audit 2026-03-10T14:52:24.271741+0000 mon.a (mon.0) 756 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:25.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:25 vm00 bash[28403]: cluster 2026-03-10T14:52:24.274277+0000 mon.a (mon.0) 757 : cluster [DBG] Standby manager daemon x restarted 2026-03-10T14:52:25.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:25 vm00 bash[28403]: cluster 2026-03-10T14:52:24.274277+0000 mon.a (mon.0) 757 : cluster [DBG] Standby manager daemon x restarted 2026-03-10T14:52:25.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:25 vm00 bash[28403]: cluster 2026-03-10T14:52:24.274441+0000 mon.a (mon.0) 758 : cluster [DBG] Standby manager daemon x started 2026-03-10T14:52:25.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:25 vm00 bash[28403]: cluster 2026-03-10T14:52:24.274441+0000 mon.a (mon.0) 758 : cluster [DBG] Standby manager daemon x started 2026-03-10T14:52:25.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:25 vm00 bash[28403]: audit 2026-03-10T14:52:24.279394+0000 mon.a (mon.0) 759 : audit [DBG] from='mgr.? 192.168.123.103:0/1359588603' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-10T14:52:25.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:25 vm00 bash[28403]: audit 2026-03-10T14:52:24.279394+0000 mon.a (mon.0) 759 : audit [DBG] from='mgr.? 
192.168.123.103:0/1359588603' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-10T14:52:25.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:25 vm00 bash[28403]: audit 2026-03-10T14:52:24.281387+0000 mon.a (mon.0) 760 : audit [DBG] from='mgr.? 192.168.123.103:0/1359588603' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T14:52:25.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:25 vm00 bash[28403]: audit 2026-03-10T14:52:24.281387+0000 mon.a (mon.0) 760 : audit [DBG] from='mgr.? 192.168.123.103:0/1359588603' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T14:52:25.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:25 vm00 bash[28403]: audit 2026-03-10T14:52:24.282025+0000 mon.a (mon.0) 761 : audit [DBG] from='mgr.? 192.168.123.103:0/1359588603' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-10T14:52:25.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:25 vm00 bash[28403]: audit 2026-03-10T14:52:24.282025+0000 mon.a (mon.0) 761 : audit [DBG] from='mgr.? 192.168.123.103:0/1359588603' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-10T14:52:25.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:25 vm00 bash[28403]: audit 2026-03-10T14:52:24.282280+0000 mon.a (mon.0) 762 : audit [DBG] from='mgr.? 192.168.123.103:0/1359588603' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T14:52:25.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:25 vm00 bash[28403]: audit 2026-03-10T14:52:24.282280+0000 mon.a (mon.0) 762 : audit [DBG] from='mgr.? 
192.168.123.103:0/1359588603' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T14:52:25.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:25 vm00 bash[28403]: audit 2026-03-10T14:52:24.287891+0000 mon.a (mon.0) 763 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:52:25.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:25 vm00 bash[28403]: audit 2026-03-10T14:52:24.287891+0000 mon.a (mon.0) 763 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:52:25.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:25 vm00 bash[28403]: audit 2026-03-10T14:52:24.288008+0000 mon.a (mon.0) 764 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:52:25.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:25 vm00 bash[28403]: audit 2026-03-10T14:52:24.288008+0000 mon.a (mon.0) 764 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:52:25.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:25 vm00 bash[28403]: audit 2026-03-10T14:52:24.309719+0000 mon.a (mon.0) 765 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T14:52:25.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:25 vm00 bash[28403]: audit 2026-03-10T14:52:24.309719+0000 mon.a (mon.0) 765 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T14:52:25.717 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:25 vm00 bash[28403]: audit 2026-03-10T14:52:24.348436+0000 mon.a (mon.0) 766 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T14:52:25.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:25 vm00 bash[28403]: audit 2026-03-10T14:52:24.348436+0000 mon.a (mon.0) 766 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T14:52:25.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:25 vm00 bash[28403]: cluster 2026-03-10T14:52:25.227381+0000 mon.a (mon.0) 767 : cluster [DBG] mgrmap e19: y(active, since 1.03324s), standbys: x 2026-03-10T14:52:25.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:25 vm00 bash[28403]: cluster 2026-03-10T14:52:25.227381+0000 mon.a (mon.0) 767 : cluster [DBG] mgrmap e19: y(active, since 1.03324s), standbys: x 2026-03-10T14:52:25.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:25 vm00 bash[28403]: audit 2026-03-10T14:52:25.254158+0000 mon.a (mon.0) 768 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:25.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:25 vm00 bash[28403]: audit 2026-03-10T14:52:25.254158+0000 mon.a (mon.0) 768 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:25.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:25 vm00 bash[20726]: audit 2026-03-10T14:52:24.271741+0000 mon.a (mon.0) 756 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:25.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:25 vm00 bash[20726]: audit 2026-03-10T14:52:24.271741+0000 mon.a (mon.0) 756 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 
2026-03-10T14:52:25.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:25 vm00 bash[20726]: cluster 2026-03-10T14:52:24.274277+0000 mon.a (mon.0) 757 : cluster [DBG] Standby manager daemon x restarted 2026-03-10T14:52:25.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:25 vm00 bash[20726]: cluster 2026-03-10T14:52:24.274277+0000 mon.a (mon.0) 757 : cluster [DBG] Standby manager daemon x restarted 2026-03-10T14:52:25.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:25 vm00 bash[20726]: cluster 2026-03-10T14:52:24.274441+0000 mon.a (mon.0) 758 : cluster [DBG] Standby manager daemon x started 2026-03-10T14:52:25.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:25 vm00 bash[20726]: cluster 2026-03-10T14:52:24.274441+0000 mon.a (mon.0) 758 : cluster [DBG] Standby manager daemon x started 2026-03-10T14:52:25.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:25 vm00 bash[20726]: audit 2026-03-10T14:52:24.279394+0000 mon.a (mon.0) 759 : audit [DBG] from='mgr.? 192.168.123.103:0/1359588603' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-10T14:52:25.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:25 vm00 bash[20726]: audit 2026-03-10T14:52:24.279394+0000 mon.a (mon.0) 759 : audit [DBG] from='mgr.? 192.168.123.103:0/1359588603' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-10T14:52:25.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:25 vm00 bash[20726]: audit 2026-03-10T14:52:24.281387+0000 mon.a (mon.0) 760 : audit [DBG] from='mgr.? 192.168.123.103:0/1359588603' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T14:52:25.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:25 vm00 bash[20726]: audit 2026-03-10T14:52:24.281387+0000 mon.a (mon.0) 760 : audit [DBG] from='mgr.? 
192.168.123.103:0/1359588603' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T14:52:25.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:25 vm00 bash[20726]: audit 2026-03-10T14:52:24.282025+0000 mon.a (mon.0) 761 : audit [DBG] from='mgr.? 192.168.123.103:0/1359588603' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-10T14:52:25.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:25 vm00 bash[20726]: audit 2026-03-10T14:52:24.282025+0000 mon.a (mon.0) 761 : audit [DBG] from='mgr.? 192.168.123.103:0/1359588603' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-10T14:52:25.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:25 vm00 bash[20726]: audit 2026-03-10T14:52:24.282280+0000 mon.a (mon.0) 762 : audit [DBG] from='mgr.? 192.168.123.103:0/1359588603' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T14:52:25.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:25 vm00 bash[20726]: audit 2026-03-10T14:52:24.282280+0000 mon.a (mon.0) 762 : audit [DBG] from='mgr.? 
192.168.123.103:0/1359588603' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T14:52:25.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:25 vm00 bash[20726]: audit 2026-03-10T14:52:24.287891+0000 mon.a (mon.0) 763 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:52:25.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:25 vm00 bash[20726]: audit 2026-03-10T14:52:24.287891+0000 mon.a (mon.0) 763 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:52:25.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:25 vm00 bash[20726]: audit 2026-03-10T14:52:24.288008+0000 mon.a (mon.0) 764 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:52:25.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:25 vm00 bash[20726]: audit 2026-03-10T14:52:24.288008+0000 mon.a (mon.0) 764 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:52:25.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:25 vm00 bash[20726]: audit 2026-03-10T14:52:24.309719+0000 mon.a (mon.0) 765 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T14:52:25.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:25 vm00 bash[20726]: audit 2026-03-10T14:52:24.309719+0000 mon.a (mon.0) 765 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T14:52:25.717 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:25 vm00 bash[20726]: audit 2026-03-10T14:52:24.348436+0000 mon.a (mon.0) 766 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T14:52:25.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:25 vm00 bash[20726]: audit 2026-03-10T14:52:24.348436+0000 mon.a (mon.0) 766 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T14:52:25.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:25 vm00 bash[20726]: cluster 2026-03-10T14:52:25.227381+0000 mon.a (mon.0) 767 : cluster [DBG] mgrmap e19: y(active, since 1.03324s), standbys: x 2026-03-10T14:52:25.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:25 vm00 bash[20726]: cluster 2026-03-10T14:52:25.227381+0000 mon.a (mon.0) 767 : cluster [DBG] mgrmap e19: y(active, since 1.03324s), standbys: x 2026-03-10T14:52:25.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:25 vm00 bash[20726]: audit 2026-03-10T14:52:25.254158+0000 mon.a (mon.0) 768 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:25.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:25 vm00 bash[20726]: audit 2026-03-10T14:52:25.254158+0000 mon.a (mon.0) 768 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:26.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:26 vm03 bash[23394]: cephadm 2026-03-10T14:52:25.238906+0000 mgr.y (mgr.24425) 2 : cephadm [INF] Saving service alertmanager spec with placement vm00=a;count:1 2026-03-10T14:52:26.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:26 vm03 bash[23394]: cephadm 2026-03-10T14:52:25.238906+0000 mgr.y (mgr.24425) 2 : cephadm [INF] Saving service alertmanager spec with placement vm00=a;count:1 
2026-03-10T14:52:26.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:26 vm03 bash[23394]: cluster 2026-03-10T14:52:25.248382+0000 mgr.y (mgr.24425) 3 : cluster [DBG] pgmap v3: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:52:26.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:26 vm03 bash[23394]: cluster 2026-03-10T14:52:25.248382+0000 mgr.y (mgr.24425) 3 : cluster [DBG] pgmap v3: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:52:26.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:26 vm03 bash[23394]: cephadm 2026-03-10T14:52:25.521885+0000 mgr.y (mgr.24425) 4 : cephadm [INF] [10/Mar/2026:14:52:25] ENGINE Bus STARTING 2026-03-10T14:52:26.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:26 vm03 bash[23394]: cephadm 2026-03-10T14:52:25.521885+0000 mgr.y (mgr.24425) 4 : cephadm [INF] [10/Mar/2026:14:52:25] ENGINE Bus STARTING 2026-03-10T14:52:26.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:26 vm03 bash[23394]: cephadm 2026-03-10T14:52:25.623439+0000 mgr.y (mgr.24425) 5 : cephadm [INF] [10/Mar/2026:14:52:25] ENGINE Serving on http://192.168.123.100:8765 2026-03-10T14:52:26.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:26 vm03 bash[23394]: cephadm 2026-03-10T14:52:25.623439+0000 mgr.y (mgr.24425) 5 : cephadm [INF] [10/Mar/2026:14:52:25] ENGINE Serving on http://192.168.123.100:8765 2026-03-10T14:52:26.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:26 vm03 bash[23394]: cephadm 2026-03-10T14:52:25.732301+0000 mgr.y (mgr.24425) 6 : cephadm [INF] [10/Mar/2026:14:52:25] ENGINE Serving on https://192.168.123.100:7150 2026-03-10T14:52:26.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:26 vm03 bash[23394]: cephadm 2026-03-10T14:52:25.732301+0000 mgr.y (mgr.24425) 6 : cephadm [INF] [10/Mar/2026:14:52:25] ENGINE Serving on https://192.168.123.100:7150 2026-03-10T14:52:26.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 
14:52:26 vm03 bash[23394]: cephadm 2026-03-10T14:52:25.732415+0000 mgr.y (mgr.24425) 7 : cephadm [INF] [10/Mar/2026:14:52:25] ENGINE Bus STARTED 2026-03-10T14:52:26.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:26 vm03 bash[23394]: cephadm 2026-03-10T14:52:25.732415+0000 mgr.y (mgr.24425) 7 : cephadm [INF] [10/Mar/2026:14:52:25] ENGINE Bus STARTED 2026-03-10T14:52:26.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:26 vm03 bash[23394]: cephadm 2026-03-10T14:52:25.732678+0000 mgr.y (mgr.24425) 8 : cephadm [INF] [10/Mar/2026:14:52:25] ENGINE Client ('192.168.123.100', 43654) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T14:52:26.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:26 vm03 bash[23394]: cephadm 2026-03-10T14:52:25.732678+0000 mgr.y (mgr.24425) 8 : cephadm [INF] [10/Mar/2026:14:52:25] ENGINE Client ('192.168.123.100', 43654) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T14:52:26.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:26 vm00 bash[28403]: cephadm 2026-03-10T14:52:25.238906+0000 mgr.y (mgr.24425) 2 : cephadm [INF] Saving service alertmanager spec with placement vm00=a;count:1 2026-03-10T14:52:26.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:26 vm00 bash[28403]: cephadm 2026-03-10T14:52:25.238906+0000 mgr.y (mgr.24425) 2 : cephadm [INF] Saving service alertmanager spec with placement vm00=a;count:1 2026-03-10T14:52:26.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:26 vm00 bash[28403]: cluster 2026-03-10T14:52:25.248382+0000 mgr.y (mgr.24425) 3 : cluster [DBG] pgmap v3: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:52:26.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:26 vm00 bash[28403]: cluster 2026-03-10T14:52:25.248382+0000 mgr.y (mgr.24425) 3 : cluster [DBG] 
pgmap v3: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:52:26.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:26 vm00 bash[28403]: cephadm 2026-03-10T14:52:25.521885+0000 mgr.y (mgr.24425) 4 : cephadm [INF] [10/Mar/2026:14:52:25] ENGINE Bus STARTING 2026-03-10T14:52:26.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:26 vm00 bash[28403]: cephadm 2026-03-10T14:52:25.521885+0000 mgr.y (mgr.24425) 4 : cephadm [INF] [10/Mar/2026:14:52:25] ENGINE Bus STARTING 2026-03-10T14:52:26.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:26 vm00 bash[28403]: cephadm 2026-03-10T14:52:25.623439+0000 mgr.y (mgr.24425) 5 : cephadm [INF] [10/Mar/2026:14:52:25] ENGINE Serving on http://192.168.123.100:8765 2026-03-10T14:52:26.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:26 vm00 bash[28403]: cephadm 2026-03-10T14:52:25.623439+0000 mgr.y (mgr.24425) 5 : cephadm [INF] [10/Mar/2026:14:52:25] ENGINE Serving on http://192.168.123.100:8765 2026-03-10T14:52:26.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:26 vm00 bash[28403]: cephadm 2026-03-10T14:52:25.732301+0000 mgr.y (mgr.24425) 6 : cephadm [INF] [10/Mar/2026:14:52:25] ENGINE Serving on https://192.168.123.100:7150 2026-03-10T14:52:26.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:26 vm00 bash[28403]: cephadm 2026-03-10T14:52:25.732301+0000 mgr.y (mgr.24425) 6 : cephadm [INF] [10/Mar/2026:14:52:25] ENGINE Serving on https://192.168.123.100:7150 2026-03-10T14:52:26.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:26 vm00 bash[28403]: cephadm 2026-03-10T14:52:25.732415+0000 mgr.y (mgr.24425) 7 : cephadm [INF] [10/Mar/2026:14:52:25] ENGINE Bus STARTED 2026-03-10T14:52:26.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:26 vm00 bash[28403]: cephadm 2026-03-10T14:52:25.732415+0000 mgr.y (mgr.24425) 7 : cephadm [INF] [10/Mar/2026:14:52:25] ENGINE Bus STARTED 2026-03-10T14:52:26.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:26 
vm00 bash[28403]: cephadm 2026-03-10T14:52:25.732678+0000 mgr.y (mgr.24425) 8 : cephadm [INF] [10/Mar/2026:14:52:25] ENGINE Client ('192.168.123.100', 43654) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T14:52:26.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:26 vm00 bash[28403]: cephadm 2026-03-10T14:52:25.732678+0000 mgr.y (mgr.24425) 8 : cephadm [INF] [10/Mar/2026:14:52:25] ENGINE Client ('192.168.123.100', 43654) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T14:52:26.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:26 vm00 bash[20726]: cephadm 2026-03-10T14:52:25.238906+0000 mgr.y (mgr.24425) 2 : cephadm [INF] Saving service alertmanager spec with placement vm00=a;count:1 2026-03-10T14:52:26.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:26 vm00 bash[20726]: cephadm 2026-03-10T14:52:25.238906+0000 mgr.y (mgr.24425) 2 : cephadm [INF] Saving service alertmanager spec with placement vm00=a;count:1 2026-03-10T14:52:26.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:26 vm00 bash[20726]: cluster 2026-03-10T14:52:25.248382+0000 mgr.y (mgr.24425) 3 : cluster [DBG] pgmap v3: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:52:26.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:26 vm00 bash[20726]: cluster 2026-03-10T14:52:25.248382+0000 mgr.y (mgr.24425) 3 : cluster [DBG] pgmap v3: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:52:26.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:26 vm00 bash[20726]: cephadm 2026-03-10T14:52:25.521885+0000 mgr.y (mgr.24425) 4 : cephadm [INF] [10/Mar/2026:14:52:25] ENGINE Bus STARTING 2026-03-10T14:52:26.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:26 vm00 bash[20726]: cephadm 
2026-03-10T14:52:25.521885+0000 mgr.y (mgr.24425) 4 : cephadm [INF] [10/Mar/2026:14:52:25] ENGINE Bus STARTING 2026-03-10T14:52:26.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:26 vm00 bash[20726]: cephadm 2026-03-10T14:52:25.623439+0000 mgr.y (mgr.24425) 5 : cephadm [INF] [10/Mar/2026:14:52:25] ENGINE Serving on http://192.168.123.100:8765 2026-03-10T14:52:26.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:26 vm00 bash[20726]: cephadm 2026-03-10T14:52:25.623439+0000 mgr.y (mgr.24425) 5 : cephadm [INF] [10/Mar/2026:14:52:25] ENGINE Serving on http://192.168.123.100:8765 2026-03-10T14:52:26.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:26 vm00 bash[20726]: cephadm 2026-03-10T14:52:25.732301+0000 mgr.y (mgr.24425) 6 : cephadm [INF] [10/Mar/2026:14:52:25] ENGINE Serving on https://192.168.123.100:7150 2026-03-10T14:52:26.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:26 vm00 bash[20726]: cephadm 2026-03-10T14:52:25.732301+0000 mgr.y (mgr.24425) 6 : cephadm [INF] [10/Mar/2026:14:52:25] ENGINE Serving on https://192.168.123.100:7150 2026-03-10T14:52:26.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:26 vm00 bash[20726]: cephadm 2026-03-10T14:52:25.732415+0000 mgr.y (mgr.24425) 7 : cephadm [INF] [10/Mar/2026:14:52:25] ENGINE Bus STARTED 2026-03-10T14:52:26.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:26 vm00 bash[20726]: cephadm 2026-03-10T14:52:25.732415+0000 mgr.y (mgr.24425) 7 : cephadm [INF] [10/Mar/2026:14:52:25] ENGINE Bus STARTED 2026-03-10T14:52:26.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:26 vm00 bash[20726]: cephadm 2026-03-10T14:52:25.732678+0000 mgr.y (mgr.24425) 8 : cephadm [INF] [10/Mar/2026:14:52:25] ENGINE Client ('192.168.123.100', 43654) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T14:52:26.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:26 vm00 bash[20726]: cephadm 
2026-03-10T14:52:25.732678+0000 mgr.y (mgr.24425) 8 : cephadm [INF] [10/Mar/2026:14:52:25] ENGINE Client ('192.168.123.100', 43654) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T14:52:27.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:27 vm03 bash[23394]: cluster 2026-03-10T14:52:26.229944+0000 mgr.y (mgr.24425) 9 : cluster [DBG] pgmap v4: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:52:27.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:27 vm03 bash[23394]: cluster 2026-03-10T14:52:26.229944+0000 mgr.y (mgr.24425) 9 : cluster [DBG] pgmap v4: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:52:27.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:27 vm03 bash[23394]: cluster 2026-03-10T14:52:26.319862+0000 mon.a (mon.0) 769 : cluster [DBG] mgrmap e20: y(active, since 2s), standbys: x 2026-03-10T14:52:27.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:27 vm03 bash[23394]: cluster 2026-03-10T14:52:26.319862+0000 mon.a (mon.0) 769 : cluster [DBG] mgrmap e20: y(active, since 2s), standbys: x 2026-03-10T14:52:27.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:27 vm00 bash[28403]: cluster 2026-03-10T14:52:26.229944+0000 mgr.y (mgr.24425) 9 : cluster [DBG] pgmap v4: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:52:27.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:27 vm00 bash[28403]: cluster 2026-03-10T14:52:26.229944+0000 mgr.y (mgr.24425) 9 : cluster [DBG] pgmap v4: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:52:27.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:27 vm00 bash[28403]: cluster 2026-03-10T14:52:26.319862+0000 mon.a (mon.0) 769 : cluster [DBG] mgrmap e20: y(active, since 2s), standbys: x 2026-03-10T14:52:27.716 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:27 vm00 bash[28403]: cluster 2026-03-10T14:52:26.319862+0000 mon.a (mon.0) 769 : cluster [DBG] mgrmap e20: y(active, since 2s), standbys: x 2026-03-10T14:52:27.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:27 vm00 bash[20726]: cluster 2026-03-10T14:52:26.229944+0000 mgr.y (mgr.24425) 9 : cluster [DBG] pgmap v4: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:52:27.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:27 vm00 bash[20726]: cluster 2026-03-10T14:52:26.229944+0000 mgr.y (mgr.24425) 9 : cluster [DBG] pgmap v4: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:52:27.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:27 vm00 bash[20726]: cluster 2026-03-10T14:52:26.319862+0000 mon.a (mon.0) 769 : cluster [DBG] mgrmap e20: y(active, since 2s), standbys: x 2026-03-10T14:52:27.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:27 vm00 bash[20726]: cluster 2026-03-10T14:52:26.319862+0000 mon.a (mon.0) 769 : cluster [DBG] mgrmap e20: y(active, since 2s), standbys: x 2026-03-10T14:52:28.875 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 14:52:28 vm03 bash[48459]: debug there is no tcmu-runner data available 2026-03-10T14:52:29.538 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/mon.b/config 2026-03-10T14:52:29.613 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:29 vm03 bash[23394]: cluster 2026-03-10T14:52:28.230290+0000 mgr.y (mgr.24425) 10 : cluster [DBG] pgmap v5: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:52:29.613 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:29 vm03 bash[23394]: cluster 2026-03-10T14:52:28.230290+0000 mgr.y (mgr.24425) 10 : cluster [DBG] pgmap v5: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:52:29.613 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:29 vm03 bash[23394]: cluster 2026-03-10T14:52:28.362423+0000 mon.a (mon.0) 770 : cluster [DBG] mgrmap e21: y(active, since 4s), standbys: x 2026-03-10T14:52:29.613 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:29 vm03 bash[23394]: cluster 2026-03-10T14:52:28.362423+0000 mon.a (mon.0) 770 : cluster [DBG] mgrmap e21: y(active, since 4s), standbys: x 2026-03-10T14:52:29.613 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:29 vm03 bash[23394]: audit 2026-03-10T14:52:28.497232+0000 mgr.y (mgr.24425) 11 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:52:29.613 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:29 vm03 bash[23394]: audit 2026-03-10T14:52:28.497232+0000 mgr.y (mgr.24425) 11 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:52:29.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:29 vm00 bash[20726]: cluster 2026-03-10T14:52:28.230290+0000 mgr.y (mgr.24425) 10 : cluster [DBG] pgmap v5: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:52:29.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:29 vm00 bash[20726]: cluster 2026-03-10T14:52:28.230290+0000 mgr.y (mgr.24425) 10 : cluster [DBG] pgmap v5: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:52:29.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:29 vm00 bash[20726]: cluster 2026-03-10T14:52:28.362423+0000 mon.a (mon.0) 770 : cluster [DBG] mgrmap e21: y(active, since 4s), standbys: x 2026-03-10T14:52:29.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:29 vm00 bash[20726]: cluster 2026-03-10T14:52:28.362423+0000 mon.a (mon.0) 770 : cluster [DBG] mgrmap e21: y(active, since 4s), standbys: x 2026-03-10T14:52:29.717 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:29 vm00 bash[20726]: audit 2026-03-10T14:52:28.497232+0000 mgr.y (mgr.24425) 11 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:52:29.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:29 vm00 bash[20726]: audit 2026-03-10T14:52:28.497232+0000 mgr.y (mgr.24425) 11 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:52:29.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:29 vm00 bash[28403]: cluster 2026-03-10T14:52:28.230290+0000 mgr.y (mgr.24425) 10 : cluster [DBG] pgmap v5: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:52:29.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:29 vm00 bash[28403]: cluster 2026-03-10T14:52:28.230290+0000 mgr.y (mgr.24425) 10 : cluster [DBG] pgmap v5: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:52:29.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:29 vm00 bash[28403]: cluster 2026-03-10T14:52:28.362423+0000 mon.a (mon.0) 770 : cluster [DBG] mgrmap e21: y(active, since 4s), standbys: x 2026-03-10T14:52:29.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:29 vm00 bash[28403]: cluster 2026-03-10T14:52:28.362423+0000 mon.a (mon.0) 770 : cluster [DBG] mgrmap e21: y(active, since 4s), standbys: x 2026-03-10T14:52:29.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:29 vm00 bash[28403]: audit 2026-03-10T14:52:28.497232+0000 mgr.y (mgr.24425) 11 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:52:29.717 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:29 vm00 bash[28403]: audit 2026-03-10T14:52:28.497232+0000 mgr.y (mgr.24425) 11 : audit [DBG] from='client.14514 -' 
entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:52:29.859 INFO:teuthology.orchestra.run.vm03.stdout:Scheduled grafana update... 2026-03-10T14:52:29.944 DEBUG:teuthology.orchestra.run.vm03:grafana.a> sudo journalctl -f -n 0 -u ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@grafana.a.service 2026-03-10T14:52:29.946 INFO:tasks.cephadm:Setting up client nodes... 2026-03-10T14:52:29.946 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 -- ceph auth get-or-create client.0 mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *' 2026-03-10T14:52:30.829 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:30 vm03 bash[23394]: audit 2026-03-10T14:52:29.819712+0000 mon.a (mon.0) 771 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:30.829 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:30 vm03 bash[23394]: audit 2026-03-10T14:52:29.819712+0000 mon.a (mon.0) 771 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:30.829 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:30 vm03 bash[23394]: audit 2026-03-10T14:52:29.836533+0000 mon.a (mon.0) 772 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:30.829 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:30 vm03 bash[23394]: audit 2026-03-10T14:52:29.836533+0000 mon.a (mon.0) 772 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:30.829 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:30 vm03 bash[23394]: audit 2026-03-10T14:52:29.848338+0000 mgr.y (mgr.24425) 12 : audit [DBG] from='client.24446 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "placement": "1;vm03=a", 
"target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:52:30.829 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:30 vm03 bash[23394]: audit 2026-03-10T14:52:29.848338+0000 mgr.y (mgr.24425) 12 : audit [DBG] from='client.24446 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "placement": "1;vm03=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:52:30.829 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:30 vm03 bash[23394]: cephadm 2026-03-10T14:52:29.849612+0000 mgr.y (mgr.24425) 13 : cephadm [INF] Saving service grafana spec with placement vm03=a;count:1 2026-03-10T14:52:30.829 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:30 vm03 bash[23394]: cephadm 2026-03-10T14:52:29.849612+0000 mgr.y (mgr.24425) 13 : cephadm [INF] Saving service grafana spec with placement vm03=a;count:1 2026-03-10T14:52:30.829 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:30 vm03 bash[23394]: audit 2026-03-10T14:52:29.855073+0000 mon.a (mon.0) 773 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:30.829 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:30 vm03 bash[23394]: audit 2026-03-10T14:52:29.855073+0000 mon.a (mon.0) 773 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:30.829 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:30 vm03 bash[23394]: audit 2026-03-10T14:52:30.026734+0000 mon.a (mon.0) 774 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:30.829 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:30 vm03 bash[23394]: audit 2026-03-10T14:52:30.026734+0000 mon.a (mon.0) 774 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:30.829 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:30 vm03 bash[23394]: audit 2026-03-10T14:52:30.085864+0000 mon.a (mon.0) 775 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:30.829 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:30 vm03 bash[23394]: audit 2026-03-10T14:52:30.085864+0000 mon.a (mon.0) 775 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:30.829 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:30 vm03 bash[23394]: cluster 2026-03-10T14:52:30.230625+0000 mgr.y (mgr.24425) 14 : cluster [DBG] pgmap v6: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-10T14:52:30.829 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:30 vm03 bash[23394]: cluster 2026-03-10T14:52:30.230625+0000 mgr.y (mgr.24425) 14 : cluster [DBG] pgmap v6: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-10T14:52:30.829 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:30 vm03 bash[23394]: audit 2026-03-10T14:52:30.549398+0000 mon.a (mon.0) 776 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:30.829 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:30 vm03 bash[23394]: audit 2026-03-10T14:52:30.549398+0000 mon.a (mon.0) 776 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:30.829 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:30 vm03 bash[23394]: audit 2026-03-10T14:52:30.554283+0000 mon.a (mon.0) 777 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:30.829 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:30 vm03 bash[23394]: audit 2026-03-10T14:52:30.554283+0000 mon.a (mon.0) 777 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:30.829 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:30 vm03 bash[23394]: audit 2026-03-10T14:52:30.555177+0000 mon.a (mon.0) 778 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: 
dispatch 2026-03-10T14:52:30.829 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:30 vm03 bash[23394]: audit 2026-03-10T14:52:30.555177+0000 mon.a (mon.0) 778 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-10T14:52:30.829 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:30 vm03 bash[23394]: audit 2026-03-10T14:52:30.720331+0000 mon.a (mon.0) 779 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:30.829 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:30 vm03 bash[23394]: audit 2026-03-10T14:52:30.720331+0000 mon.a (mon.0) 779 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:30.829 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:30 vm03 bash[23394]: audit 2026-03-10T14:52:30.725425+0000 mon.a (mon.0) 780 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:30.829 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:30 vm03 bash[23394]: audit 2026-03-10T14:52:30.725425+0000 mon.a (mon.0) 780 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:30.829 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:30 vm03 bash[23394]: audit 2026-03-10T14:52:30.726354+0000 mon.a (mon.0) 781 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-10T14:52:30.829 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:30 vm03 bash[23394]: audit 2026-03-10T14:52:30.726354+0000 mon.a (mon.0) 781 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-10T14:52:30.829 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:30 vm03 bash[23394]: audit 
2026-03-10T14:52:30.728411+0000 mon.a (mon.0) 782 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:52:30.829 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:30 vm03 bash[23394]: audit 2026-03-10T14:52:30.728411+0000 mon.a (mon.0) 782 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:52:31.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:30 vm03 bash[23394]: audit 2026-03-10T14:52:30.728841+0000 mon.a (mon.0) 783 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:52:31.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:30 vm03 bash[23394]: audit 2026-03-10T14:52:30.728841+0000 mon.a (mon.0) 783 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:52:31.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:30 vm03 bash[23394]: cephadm 2026-03-10T14:52:30.729487+0000 mgr.y (mgr.24425) 15 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf 2026-03-10T14:52:31.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:30 vm03 bash[23394]: cephadm 2026-03-10T14:52:30.729487+0000 mgr.y (mgr.24425) 15 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf 2026-03-10T14:52:31.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:30 vm03 bash[23394]: cephadm 2026-03-10T14:52:30.729643+0000 mgr.y (mgr.24425) 16 : cephadm [INF] Updating vm03:/etc/ceph/ceph.conf 2026-03-10T14:52:31.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:30 vm03 bash[23394]: cephadm 2026-03-10T14:52:30.729643+0000 mgr.y (mgr.24425) 16 : cephadm [INF] Updating vm03:/etc/ceph/ceph.conf 2026-03-10T14:52:31.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:30 vm00 bash[28403]: audit 
2026-03-10T14:52:29.819712+0000 mon.a (mon.0) 771 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:31.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:30 vm00 bash[28403]: audit 2026-03-10T14:52:29.819712+0000 mon.a (mon.0) 771 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:31.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:30 vm00 bash[28403]: audit 2026-03-10T14:52:29.836533+0000 mon.a (mon.0) 772 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:31.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:30 vm00 bash[28403]: audit 2026-03-10T14:52:29.836533+0000 mon.a (mon.0) 772 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:31.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:30 vm00 bash[28403]: audit 2026-03-10T14:52:29.848338+0000 mgr.y (mgr.24425) 12 : audit [DBG] from='client.24446 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "placement": "1;vm03=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:52:31.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:30 vm00 bash[28403]: audit 2026-03-10T14:52:29.848338+0000 mgr.y (mgr.24425) 12 : audit [DBG] from='client.24446 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "placement": "1;vm03=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:52:31.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:30 vm00 bash[28403]: cephadm 2026-03-10T14:52:29.849612+0000 mgr.y (mgr.24425) 13 : cephadm [INF] Saving service grafana spec with placement vm03=a;count:1 2026-03-10T14:52:31.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:30 vm00 bash[28403]: cephadm 2026-03-10T14:52:29.849612+0000 mgr.y (mgr.24425) 13 : cephadm [INF] Saving service grafana spec with placement vm03=a;count:1 2026-03-10T14:52:31.216 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:30 vm00 bash[28403]: audit 2026-03-10T14:52:29.855073+0000 mon.a (mon.0) 773 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:31.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:30 vm00 bash[28403]: audit 2026-03-10T14:52:29.855073+0000 mon.a (mon.0) 773 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:31.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:30 vm00 bash[28403]: audit 2026-03-10T14:52:30.026734+0000 mon.a (mon.0) 774 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:31.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:30 vm00 bash[28403]: audit 2026-03-10T14:52:30.026734+0000 mon.a (mon.0) 774 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:31.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:30 vm00 bash[28403]: audit 2026-03-10T14:52:30.085864+0000 mon.a (mon.0) 775 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:31.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:30 vm00 bash[28403]: audit 2026-03-10T14:52:30.085864+0000 mon.a (mon.0) 775 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:31.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:30 vm00 bash[28403]: cluster 2026-03-10T14:52:30.230625+0000 mgr.y (mgr.24425) 14 : cluster [DBG] pgmap v6: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-10T14:52:31.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:30 vm00 bash[28403]: cluster 2026-03-10T14:52:30.230625+0000 mgr.y (mgr.24425) 14 : cluster [DBG] pgmap v6: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-10T14:52:31.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:30 vm00 
bash[28403]: audit 2026-03-10T14:52:30.549398+0000 mon.a (mon.0) 776 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:31.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:30 vm00 bash[28403]: audit 2026-03-10T14:52:30.549398+0000 mon.a (mon.0) 776 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:31.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:30 vm00 bash[28403]: audit 2026-03-10T14:52:30.554283+0000 mon.a (mon.0) 777 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:31.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:30 vm00 bash[28403]: audit 2026-03-10T14:52:30.554283+0000 mon.a (mon.0) 777 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:31.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:30 vm00 bash[28403]: audit 2026-03-10T14:52:30.555177+0000 mon.a (mon.0) 778 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-10T14:52:31.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:30 vm00 bash[28403]: audit 2026-03-10T14:52:30.555177+0000 mon.a (mon.0) 778 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-10T14:52:31.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:30 vm00 bash[28403]: audit 2026-03-10T14:52:30.720331+0000 mon.a (mon.0) 779 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:31.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:30 vm00 bash[28403]: audit 2026-03-10T14:52:30.720331+0000 mon.a (mon.0) 779 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:31.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:30 
vm00 bash[28403]: audit 2026-03-10T14:52:30.725425+0000 mon.a (mon.0) 780 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:31.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:30 vm00 bash[28403]: audit 2026-03-10T14:52:30.725425+0000 mon.a (mon.0) 780 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:31.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:30 vm00 bash[28403]: audit 2026-03-10T14:52:30.726354+0000 mon.a (mon.0) 781 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-10T14:52:31.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:30 vm00 bash[28403]: audit 2026-03-10T14:52:30.726354+0000 mon.a (mon.0) 781 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-10T14:52:31.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:30 vm00 bash[28403]: audit 2026-03-10T14:52:30.728411+0000 mon.a (mon.0) 782 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:52:31.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:30 vm00 bash[28403]: audit 2026-03-10T14:52:30.728411+0000 mon.a (mon.0) 782 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:52:31.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:30 vm00 bash[28403]: audit 2026-03-10T14:52:30.728841+0000 mon.a (mon.0) 783 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:52:31.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:30 vm00 bash[28403]: audit 
2026-03-10T14:52:30.728841+0000 mon.a (mon.0) 783 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:52:31.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:30 vm00 bash[28403]: cephadm 2026-03-10T14:52:30.729487+0000 mgr.y (mgr.24425) 15 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf 2026-03-10T14:52:31.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:30 vm00 bash[28403]: cephadm 2026-03-10T14:52:30.729487+0000 mgr.y (mgr.24425) 15 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf 2026-03-10T14:52:31.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:30 vm00 bash[28403]: cephadm 2026-03-10T14:52:30.729643+0000 mgr.y (mgr.24425) 16 : cephadm [INF] Updating vm03:/etc/ceph/ceph.conf 2026-03-10T14:52:31.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:30 vm00 bash[28403]: cephadm 2026-03-10T14:52:30.729643+0000 mgr.y (mgr.24425) 16 : cephadm [INF] Updating vm03:/etc/ceph/ceph.conf 2026-03-10T14:52:31.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:30 vm00 bash[20726]: audit 2026-03-10T14:52:29.819712+0000 mon.a (mon.0) 771 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:31.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:30 vm00 bash[20726]: audit 2026-03-10T14:52:29.819712+0000 mon.a (mon.0) 771 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:31.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:30 vm00 bash[20726]: audit 2026-03-10T14:52:29.836533+0000 mon.a (mon.0) 772 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:31.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:30 vm00 bash[20726]: audit 2026-03-10T14:52:29.836533+0000 mon.a (mon.0) 772 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:31.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 
14:52:30 vm00 bash[20726]: audit 2026-03-10T14:52:29.848338+0000 mgr.y (mgr.24425) 12 : audit [DBG] from='client.24446 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "placement": "1;vm03=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:52:31.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:30 vm00 bash[20726]: audit 2026-03-10T14:52:29.848338+0000 mgr.y (mgr.24425) 12 : audit [DBG] from='client.24446 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "placement": "1;vm03=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:52:31.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:30 vm00 bash[20726]: cephadm 2026-03-10T14:52:29.849612+0000 mgr.y (mgr.24425) 13 : cephadm [INF] Saving service grafana spec with placement vm03=a;count:1 2026-03-10T14:52:31.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:30 vm00 bash[20726]: cephadm 2026-03-10T14:52:29.849612+0000 mgr.y (mgr.24425) 13 : cephadm [INF] Saving service grafana spec with placement vm03=a;count:1 2026-03-10T14:52:31.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:30 vm00 bash[20726]: audit 2026-03-10T14:52:29.855073+0000 mon.a (mon.0) 773 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:31.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:30 vm00 bash[20726]: audit 2026-03-10T14:52:29.855073+0000 mon.a (mon.0) 773 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:31.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:30 vm00 bash[20726]: audit 2026-03-10T14:52:30.026734+0000 mon.a (mon.0) 774 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:31.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:30 vm00 bash[20726]: audit 2026-03-10T14:52:30.026734+0000 mon.a (mon.0) 774 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:31.217 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:30 vm00 bash[20726]: audit 2026-03-10T14:52:30.085864+0000 mon.a (mon.0) 775 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:31.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:30 vm00 bash[20726]: audit 2026-03-10T14:52:30.085864+0000 mon.a (mon.0) 775 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:31.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:30 vm00 bash[20726]: cluster 2026-03-10T14:52:30.230625+0000 mgr.y (mgr.24425) 14 : cluster [DBG] pgmap v6: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-10T14:52:31.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:30 vm00 bash[20726]: cluster 2026-03-10T14:52:30.230625+0000 mgr.y (mgr.24425) 14 : cluster [DBG] pgmap v6: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-10T14:52:31.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:30 vm00 bash[20726]: audit 2026-03-10T14:52:30.549398+0000 mon.a (mon.0) 776 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:31.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:30 vm00 bash[20726]: audit 2026-03-10T14:52:30.549398+0000 mon.a (mon.0) 776 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:31.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:30 vm00 bash[20726]: audit 2026-03-10T14:52:30.554283+0000 mon.a (mon.0) 777 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:31.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:30 vm00 bash[20726]: audit 2026-03-10T14:52:30.554283+0000 mon.a (mon.0) 777 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:31.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:30 vm00 
bash[20726]: audit 2026-03-10T14:52:30.555177+0000 mon.a (mon.0) 778 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-10T14:52:31.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:30 vm00 bash[20726]: audit 2026-03-10T14:52:30.555177+0000 mon.a (mon.0) 778 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-10T14:52:31.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:30 vm00 bash[20726]: audit 2026-03-10T14:52:30.720331+0000 mon.a (mon.0) 779 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:31.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:30 vm00 bash[20726]: audit 2026-03-10T14:52:30.720331+0000 mon.a (mon.0) 779 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:31.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:30 vm00 bash[20726]: audit 2026-03-10T14:52:30.725425+0000 mon.a (mon.0) 780 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:31.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:30 vm00 bash[20726]: audit 2026-03-10T14:52:30.725425+0000 mon.a (mon.0) 780 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:31.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:30 vm00 bash[20726]: audit 2026-03-10T14:52:30.726354+0000 mon.a (mon.0) 781 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-10T14:52:31.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:30 vm00 bash[20726]: audit 2026-03-10T14:52:30.726354+0000 mon.a (mon.0) 781 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' 
entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-10T14:52:31.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:30 vm00 bash[20726]: audit 2026-03-10T14:52:30.728411+0000 mon.a (mon.0) 782 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:52:31.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:30 vm00 bash[20726]: audit 2026-03-10T14:52:30.728411+0000 mon.a (mon.0) 782 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:52:31.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:30 vm00 bash[20726]: audit 2026-03-10T14:52:30.728841+0000 mon.a (mon.0) 783 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:52:31.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:30 vm00 bash[20726]: audit 2026-03-10T14:52:30.728841+0000 mon.a (mon.0) 783 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:52:31.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:30 vm00 bash[20726]: cephadm 2026-03-10T14:52:30.729487+0000 mgr.y (mgr.24425) 15 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf 2026-03-10T14:52:31.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:30 vm00 bash[20726]: cephadm 2026-03-10T14:52:30.729487+0000 mgr.y (mgr.24425) 15 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf 2026-03-10T14:52:31.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:30 vm00 bash[20726]: cephadm 2026-03-10T14:52:30.729643+0000 mgr.y (mgr.24425) 16 : cephadm [INF] Updating vm03:/etc/ceph/ceph.conf 2026-03-10T14:52:31.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:30 vm00 bash[20726]: cephadm 
2026-03-10T14:52:30.729643+0000 mgr.y (mgr.24425) 16 : cephadm [INF] Updating vm03:/etc/ceph/ceph.conf 2026-03-10T14:52:31.588 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:31 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:52:31.588 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:31 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:52:31.589 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 10 14:52:31 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:52:31.589 INFO:journalctl@ceph.rgw.foo.a.vm00.stdout:Mar 10 14:52:31 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-10T14:52:31.589 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:52:31 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:52:31.589 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 10 14:52:31 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:52:31.589 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 10 14:52:31 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:52:31.589 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 10 14:52:31 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-10T14:52:31.915 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 10 14:52:31 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:52:31.915 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:31 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:52:31.915 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:52:31 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:52:31.915 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:31 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-10T14:52:31.915 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 10 14:52:31 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:52:31.916 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 10 14:52:31 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:52:31.916 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 10 14:52:31 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:52:31.916 INFO:journalctl@ceph.rgw.foo.a.vm00.stdout:Mar 10 14:52:31 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-10T14:52:31.916 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 14:52:31 vm00 systemd[1]: Started Ceph node-exporter.a for 93bd26bc-1c8f-11f1-8404-610ce866bde7.
2026-03-10T14:52:31.916 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 14:52:31 vm00 bash[55421]: Unable to find image 'quay.io/prometheus/node-exporter:v1.7.0' locally
2026-03-10T14:52:32.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:31 vm00 bash[20726]: cephadm 2026-03-10T14:52:30.768566+0000 mgr.y (mgr.24425) 17 : cephadm [INF] Updating vm00:/var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/config/ceph.conf
2026-03-10T14:52:32.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:31 vm00 bash[20726]: cephadm 2026-03-10T14:52:30.768566+0000 mgr.y (mgr.24425) 17 : cephadm [INF] Updating vm00:/var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/config/ceph.conf
2026-03-10T14:52:32.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:31 vm00 bash[20726]: cephadm 2026-03-10T14:52:30.774907+0000 mgr.y (mgr.24425) 18 : cephadm [INF] Updating vm03:/var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/config/ceph.conf
2026-03-10T14:52:32.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:31 vm00 bash[20726]: cephadm 2026-03-10T14:52:30.774907+0000 mgr.y (mgr.24425) 18 : cephadm [INF] Updating vm03:/var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/config/ceph.conf
2026-03-10T14:52:32.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:31 vm00 bash[20726]: cephadm 2026-03-10T14:52:30.809492+0000 mgr.y (mgr.24425) 19 : cephadm [INF] Updating vm00:/etc/ceph/ceph.client.admin.keyring
2026-03-10T14:52:32.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:31 vm00 bash[20726]: cephadm 2026-03-10T14:52:30.809492+0000 mgr.y (mgr.24425) 19 : cephadm [INF] Updating vm00:/etc/ceph/ceph.client.admin.keyring
2026-03-10T14:52:32.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:31 vm00 bash[20726]: cephadm 2026-03-10T14:52:30.817091+0000 mgr.y (mgr.24425) 20 : cephadm [INF] Updating vm03:/etc/ceph/ceph.client.admin.keyring
2026-03-10T14:52:32.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:31 vm00 bash[20726]: cephadm 2026-03-10T14:52:30.817091+0000 mgr.y (mgr.24425) 20 : cephadm [INF] Updating vm03:/etc/ceph/ceph.client.admin.keyring
2026-03-10T14:52:32.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:31 vm00 bash[20726]: cephadm 2026-03-10T14:52:30.866484+0000 mgr.y (mgr.24425) 21 : cephadm [INF] Updating vm00:/var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/config/ceph.client.admin.keyring
2026-03-10T14:52:32.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:31 vm00 bash[20726]: cephadm 2026-03-10T14:52:30.866484+0000 mgr.y (mgr.24425) 21 : cephadm [INF] Updating vm00:/var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/config/ceph.client.admin.keyring
2026-03-10T14:52:32.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:31 vm00 bash[20726]: cephadm 2026-03-10T14:52:30.873930+0000 mgr.y (mgr.24425) 22 : cephadm [INF] Updating vm03:/var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/config/ceph.client.admin.keyring
2026-03-10T14:52:32.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:31 vm00 bash[20726]: cephadm 2026-03-10T14:52:30.873930+0000 mgr.y (mgr.24425) 22 : cephadm [INF] Updating vm03:/var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/config/ceph.client.admin.keyring
2026-03-10T14:52:32.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:31 vm00 bash[20726]: audit 2026-03-10T14:52:30.917768+0000 mon.a (mon.0) 784 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:32.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:31 vm00 bash[20726]: audit 2026-03-10T14:52:30.917768+0000 mon.a (mon.0) 784 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:32.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:31 vm00 bash[20726]: audit 2026-03-10T14:52:30.924007+0000 mon.a (mon.0) 785 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:32.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:31 vm00 bash[20726]: audit 2026-03-10T14:52:30.924007+0000 mon.a (mon.0) 785 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:32.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:31 vm00 bash[20726]: audit 2026-03-10T14:52:30.928013+0000 mon.a (mon.0) 786 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:32.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:31 vm00 bash[20726]: audit 2026-03-10T14:52:30.928013+0000 mon.a (mon.0) 786 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:32.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:31 vm00 bash[20726]: audit 2026-03-10T14:52:30.932452+0000 mon.a (mon.0) 787 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:32.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:31 vm00 bash[20726]: audit 2026-03-10T14:52:30.932452+0000 mon.a (mon.0) 787 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:32.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:31 vm00 bash[20726]: audit 2026-03-10T14:52:30.938537+0000 mon.a (mon.0) 788 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:32.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:31 vm00 bash[20726]: audit 2026-03-10T14:52:30.938537+0000 mon.a (mon.0) 788 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:32.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:31 vm00 bash[20726]: cephadm 2026-03-10T14:52:30.940244+0000 mgr.y (mgr.24425) 23 : cephadm [INF] Deploying daemon node-exporter.a on vm00
2026-03-10T14:52:32.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:31 vm00 bash[20726]: cephadm 2026-03-10T14:52:30.940244+0000 mgr.y (mgr.24425) 23 : cephadm [INF] Deploying daemon node-exporter.a on vm00
2026-03-10T14:52:32.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:31 vm00 bash[20726]: audit 2026-03-10T14:52:31.703628+0000 mon.a (mon.0) 789 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:32.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:31 vm00 bash[20726]: audit 2026-03-10T14:52:31.703628+0000 mon.a (mon.0) 789 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:32.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:31 vm00 bash[20726]: audit 2026-03-10T14:52:31.708757+0000 mon.a (mon.0) 790 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:32.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:31 vm00 bash[20726]: audit 2026-03-10T14:52:31.708757+0000 mon.a (mon.0) 790 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:32.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:31 vm00 bash[20726]: audit 2026-03-10T14:52:31.713542+0000 mon.a (mon.0) 791 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:32.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:31 vm00 bash[20726]: audit 2026-03-10T14:52:31.713542+0000 mon.a (mon.0) 791 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:32.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:31 vm00 bash[20726]: cephadm 2026-03-10T14:52:31.714060+0000 mgr.y (mgr.24425) 24 : cephadm [INF] Deploying daemon node-exporter.b on vm03
2026-03-10T14:52:32.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:31 vm00 bash[20726]: cephadm 2026-03-10T14:52:31.714060+0000 mgr.y (mgr.24425) 24 : cephadm [INF] Deploying daemon node-exporter.b on vm03
2026-03-10T14:52:32.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:31 vm00 bash[28403]: cephadm 2026-03-10T14:52:30.768566+0000 mgr.y (mgr.24425) 17 : cephadm [INF] Updating vm00:/var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/config/ceph.conf
2026-03-10T14:52:32.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:31 vm00 bash[28403]: cephadm 2026-03-10T14:52:30.768566+0000 mgr.y (mgr.24425) 17 : cephadm [INF] Updating vm00:/var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/config/ceph.conf
2026-03-10T14:52:32.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:31 vm00 bash[28403]: cephadm 2026-03-10T14:52:30.774907+0000 mgr.y (mgr.24425) 18 : cephadm [INF] Updating vm03:/var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/config/ceph.conf
2026-03-10T14:52:32.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:31 vm00 bash[28403]: cephadm 2026-03-10T14:52:30.774907+0000 mgr.y (mgr.24425) 18 : cephadm [INF] Updating vm03:/var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/config/ceph.conf
2026-03-10T14:52:32.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:31 vm00 bash[28403]: cephadm 2026-03-10T14:52:30.809492+0000 mgr.y (mgr.24425) 19 : cephadm [INF] Updating vm00:/etc/ceph/ceph.client.admin.keyring
2026-03-10T14:52:32.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:31 vm00 bash[28403]: cephadm 2026-03-10T14:52:30.809492+0000 mgr.y (mgr.24425) 19 : cephadm [INF] Updating vm00:/etc/ceph/ceph.client.admin.keyring
2026-03-10T14:52:32.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:31 vm00 bash[28403]: cephadm 2026-03-10T14:52:30.817091+0000 mgr.y (mgr.24425) 20 : cephadm [INF] Updating vm03:/etc/ceph/ceph.client.admin.keyring
2026-03-10T14:52:32.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:31 vm00 bash[28403]: cephadm 2026-03-10T14:52:30.817091+0000 mgr.y (mgr.24425) 20 : cephadm [INF] Updating vm03:/etc/ceph/ceph.client.admin.keyring
2026-03-10T14:52:32.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:31 vm00 bash[28403]: cephadm 2026-03-10T14:52:30.866484+0000 mgr.y (mgr.24425) 21 : cephadm [INF] Updating vm00:/var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/config/ceph.client.admin.keyring
2026-03-10T14:52:32.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:31 vm00 bash[28403]: cephadm 2026-03-10T14:52:30.866484+0000 mgr.y (mgr.24425) 21 : cephadm [INF] Updating vm00:/var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/config/ceph.client.admin.keyring
2026-03-10T14:52:32.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:31 vm00 bash[28403]: cephadm 2026-03-10T14:52:30.873930+0000 mgr.y (mgr.24425) 22 : cephadm [INF] Updating vm03:/var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/config/ceph.client.admin.keyring
2026-03-10T14:52:32.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:31 vm00 bash[28403]: cephadm 2026-03-10T14:52:30.873930+0000 mgr.y (mgr.24425) 22 : cephadm [INF] Updating vm03:/var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/config/ceph.client.admin.keyring
2026-03-10T14:52:32.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:31 vm00 bash[28403]: audit 2026-03-10T14:52:30.917768+0000 mon.a (mon.0) 784 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:32.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:31 vm00 bash[28403]: audit 2026-03-10T14:52:30.917768+0000 mon.a (mon.0) 784 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:32.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:31 vm00 bash[28403]: audit 2026-03-10T14:52:30.924007+0000 mon.a (mon.0) 785 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:32.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:31 vm00 bash[28403]: audit 2026-03-10T14:52:30.924007+0000 mon.a (mon.0) 785 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:32.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:31 vm00 bash[28403]: audit 2026-03-10T14:52:30.928013+0000 mon.a (mon.0) 786 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:32.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:31 vm00 bash[28403]: audit 2026-03-10T14:52:30.928013+0000 mon.a (mon.0) 786 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:32.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:31 vm00 bash[28403]: audit 2026-03-10T14:52:30.932452+0000 mon.a (mon.0) 787 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:32.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:31 vm00 bash[28403]: audit 2026-03-10T14:52:30.932452+0000 mon.a (mon.0) 787 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:32.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:31 vm00 bash[28403]: audit 2026-03-10T14:52:30.938537+0000 mon.a (mon.0) 788 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:32.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:31 vm00 bash[28403]: audit 2026-03-10T14:52:30.938537+0000 mon.a (mon.0) 788 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:32.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:31 vm00 bash[28403]: cephadm 2026-03-10T14:52:30.940244+0000 mgr.y (mgr.24425) 23 : cephadm [INF] Deploying daemon node-exporter.a on vm00
2026-03-10T14:52:32.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:31 vm00 bash[28403]: cephadm 2026-03-10T14:52:30.940244+0000 mgr.y (mgr.24425) 23 : cephadm [INF] Deploying daemon node-exporter.a on vm00
2026-03-10T14:52:32.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:31 vm00 bash[28403]: audit 2026-03-10T14:52:31.703628+0000 mon.a (mon.0) 789 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:32.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:31 vm00 bash[28403]: audit 2026-03-10T14:52:31.703628+0000 mon.a (mon.0) 789 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:32.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:31 vm00 bash[28403]: audit 2026-03-10T14:52:31.708757+0000 mon.a (mon.0) 790 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:32.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:31 vm00 bash[28403]: audit 2026-03-10T14:52:31.708757+0000 mon.a (mon.0) 790 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:32.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:31 vm00 bash[28403]: audit 2026-03-10T14:52:31.713542+0000 mon.a (mon.0) 791 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:32.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:31 vm00 bash[28403]: audit 2026-03-10T14:52:31.713542+0000 mon.a (mon.0) 791 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:32.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:31 vm00 bash[28403]: cephadm 2026-03-10T14:52:31.714060+0000 mgr.y (mgr.24425) 24 : cephadm [INF] Deploying daemon node-exporter.b on vm03
2026-03-10T14:52:32.217 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:31 vm00 bash[28403]: cephadm 2026-03-10T14:52:31.714060+0000 mgr.y (mgr.24425) 24 : cephadm [INF] Deploying daemon node-exporter.b on vm03
2026-03-10T14:52:32.238 INFO:journalctl@ceph.osd.6.vm03.stdout:Mar 10 14:52:32 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:52:32.238 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 14:52:32 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:52:32.238 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 14:52:32 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:52:32.238 INFO:journalctl@ceph.osd.5.vm03.stdout:Mar 10 14:52:32 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:52:32.238 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:31 vm03 bash[23394]: cephadm 2026-03-10T14:52:30.768566+0000 mgr.y (mgr.24425) 17 : cephadm [INF] Updating vm00:/var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/config/ceph.conf
2026-03-10T14:52:32.238 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:31 vm03 bash[23394]: cephadm 2026-03-10T14:52:30.768566+0000 mgr.y (mgr.24425) 17 : cephadm [INF] Updating vm00:/var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/config/ceph.conf
2026-03-10T14:52:32.238 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:31 vm03 bash[23394]: cephadm 2026-03-10T14:52:30.774907+0000 mgr.y (mgr.24425) 18 : cephadm [INF] Updating vm03:/var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/config/ceph.conf
2026-03-10T14:52:32.238 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:31 vm03 bash[23394]: cephadm 2026-03-10T14:52:30.774907+0000 mgr.y (mgr.24425) 18 : cephadm [INF] Updating vm03:/var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/config/ceph.conf
2026-03-10T14:52:32.238 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:31 vm03 bash[23394]: cephadm 2026-03-10T14:52:30.809492+0000 mgr.y (mgr.24425) 19 : cephadm [INF] Updating vm00:/etc/ceph/ceph.client.admin.keyring
2026-03-10T14:52:32.238 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:31 vm03 bash[23394]: cephadm 2026-03-10T14:52:30.809492+0000 mgr.y (mgr.24425) 19 : cephadm [INF] Updating vm00:/etc/ceph/ceph.client.admin.keyring
2026-03-10T14:52:32.238 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:31 vm03 bash[23394]: cephadm 2026-03-10T14:52:30.817091+0000 mgr.y (mgr.24425) 20 : cephadm [INF] Updating vm03:/etc/ceph/ceph.client.admin.keyring
2026-03-10T14:52:32.238 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:31 vm03 bash[23394]: cephadm 2026-03-10T14:52:30.817091+0000 mgr.y (mgr.24425) 20 : cephadm [INF] Updating vm03:/etc/ceph/ceph.client.admin.keyring
2026-03-10T14:52:32.238 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:31 vm03 bash[23394]: cephadm 2026-03-10T14:52:30.866484+0000 mgr.y (mgr.24425) 21 : cephadm [INF] Updating vm00:/var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/config/ceph.client.admin.keyring
2026-03-10T14:52:32.238 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:31 vm03 bash[23394]: cephadm 2026-03-10T14:52:30.866484+0000 mgr.y (mgr.24425) 21 : cephadm [INF] Updating vm00:/var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/config/ceph.client.admin.keyring
2026-03-10T14:52:32.238 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:31 vm03 bash[23394]: cephadm 2026-03-10T14:52:30.873930+0000 mgr.y (mgr.24425) 22 : cephadm [INF] Updating vm03:/var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/config/ceph.client.admin.keyring
2026-03-10T14:52:32.238 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:31 vm03 bash[23394]: cephadm 2026-03-10T14:52:30.873930+0000 mgr.y (mgr.24425) 22 : cephadm [INF] Updating vm03:/var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/config/ceph.client.admin.keyring
2026-03-10T14:52:32.239 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:31 vm03 bash[23394]: audit 2026-03-10T14:52:30.917768+0000 mon.a (mon.0) 784 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:32.239 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:31 vm03 bash[23394]: audit 2026-03-10T14:52:30.917768+0000 mon.a (mon.0) 784 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:32.239 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:31 vm03 bash[23394]: audit 2026-03-10T14:52:30.924007+0000 mon.a (mon.0) 785 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:32.239 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:31 vm03 bash[23394]: audit 2026-03-10T14:52:30.924007+0000 mon.a (mon.0) 785 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:32.239 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:31 vm03 bash[23394]: audit 2026-03-10T14:52:30.928013+0000 mon.a (mon.0) 786 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:32.239 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:31 vm03 bash[23394]: audit 2026-03-10T14:52:30.928013+0000 mon.a (mon.0) 786 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:32.239 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:31 vm03 bash[23394]: audit 2026-03-10T14:52:30.932452+0000 mon.a (mon.0) 787 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:32.239 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:31 vm03 bash[23394]: audit 2026-03-10T14:52:30.932452+0000 mon.a (mon.0) 787 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:32.239 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:31 vm03 bash[23394]: audit 2026-03-10T14:52:30.938537+0000 mon.a (mon.0) 788 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:32.239 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:31 vm03 bash[23394]: audit 2026-03-10T14:52:30.938537+0000 mon.a (mon.0) 788 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:32.239 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:31 vm03 bash[23394]: cephadm 2026-03-10T14:52:30.940244+0000 mgr.y (mgr.24425) 23 : cephadm [INF] Deploying daemon node-exporter.a on vm00
2026-03-10T14:52:32.239 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:31 vm03 bash[23394]: cephadm 2026-03-10T14:52:30.940244+0000 mgr.y (mgr.24425) 23 : cephadm [INF] Deploying daemon node-exporter.a on vm00
2026-03-10T14:52:32.239 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:31 vm03 bash[23394]: audit 2026-03-10T14:52:31.703628+0000 mon.a (mon.0) 789 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:32.239 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:31 vm03 bash[23394]: audit 2026-03-10T14:52:31.703628+0000 mon.a (mon.0) 789 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:32.239 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:31 vm03 bash[23394]: audit 2026-03-10T14:52:31.708757+0000 mon.a (mon.0) 790 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:32.239 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:31 vm03 bash[23394]: audit 2026-03-10T14:52:31.708757+0000 mon.a (mon.0) 790 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:32.239 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:31 vm03 bash[23394]: audit 2026-03-10T14:52:31.713542+0000 mon.a (mon.0) 791 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:32.239 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:31 vm03 bash[23394]: audit 2026-03-10T14:52:31.713542+0000 mon.a (mon.0) 791 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:32.239 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:31 vm03 bash[23394]: cephadm 2026-03-10T14:52:31.714060+0000 mgr.y (mgr.24425) 24 : cephadm [INF] Deploying daemon node-exporter.b on vm03
2026-03-10T14:52:32.239 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:31 vm03 bash[23394]: cephadm 2026-03-10T14:52:31.714060+0000 mgr.y (mgr.24425) 24 : cephadm [INF] Deploying daemon node-exporter.b on vm03
2026-03-10T14:52:32.239 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:32 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:52:32.239 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:52:32 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:52:32.239 INFO:journalctl@ceph.osd.4.vm03.stdout:Mar 10 14:52:32 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:52:32.239 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 14:52:32 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:52:32.239 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 14:52:32 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:52:32.512 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:32 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:52:32.512 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:52:32 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:52:32.512 INFO:journalctl@ceph.osd.6.vm03.stdout:Mar 10 14:52:32 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:52:32.512 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 14:52:32 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:52:32.512 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 14:52:32 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:52:32.513 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 14:52:32 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:52:32.513 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 14:52:32 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:52:32.513 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 14:52:32 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:52:32.513 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 14:52:32 vm03 systemd[1]: Started Ceph node-exporter.b for 93bd26bc-1c8f-11f1-8404-610ce866bde7.
2026-03-10T14:52:32.513 INFO:journalctl@ceph.osd.5.vm03.stdout:Mar 10 14:52:32 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:52:32.513 INFO:journalctl@ceph.osd.4.vm03.stdout:Mar 10 14:52:32 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:52:32.875 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 14:52:32 vm03 bash[50178]: Unable to find image 'quay.io/prometheus/node-exporter:v1.7.0' locally
2026-03-10T14:52:33.145 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 14:52:32 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:52:33.466 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 14:52:33 vm00 bash[55421]: v1.7.0: Pulling from prometheus/node-exporter
2026-03-10T14:52:33.770 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:33 vm00 bash[28403]: cluster 2026-03-10T14:52:32.231076+0000 mgr.y (mgr.24425) 25 : cluster [DBG] pgmap v7: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s
2026-03-10T14:52:33.770 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:33 vm00 bash[28403]: audit 2026-03-10T14:52:32.526458+0000 mon.a (mon.0) 792 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:33.770 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:33 vm00 bash[28403]: audit 2026-03-10T14:52:32.532387+0000 mon.a (mon.0) 793 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:33.770 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:33 vm00 bash[28403]: audit 2026-03-10T14:52:32.536204+0000 mon.a (mon.0) 794 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:33.770 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:33 vm00 bash[28403]: audit 2026-03-10T14:52:32.542341+0000 mon.a (mon.0) 795 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:33.770 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:33 vm00 bash[28403]: audit 2026-03-10T14:52:32.547368+0000 mon.a (mon.0) 796 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:33.770 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:33 vm00 bash[28403]: cephadm 2026-03-10T14:52:32.552937+0000 mgr.y (mgr.24425) 26 : cephadm [INF] Deploying daemon alertmanager.a on vm00
2026-03-10T14:52:33.771 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:33 vm00 bash[20726]: cluster 2026-03-10T14:52:32.231076+0000 mgr.y (mgr.24425) 25 : cluster [DBG] pgmap v7: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s
2026-03-10T14:52:33.771 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:33 vm00 bash[20726]: audit 2026-03-10T14:52:32.526458+0000 mon.a (mon.0) 792 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:33.771 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:33 vm00 bash[20726]: audit 2026-03-10T14:52:32.532387+0000 mon.a (mon.0) 793 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:33.771 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:33 vm00 bash[20726]: audit 2026-03-10T14:52:32.536204+0000 mon.a (mon.0) 794 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:33.771 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:33 vm00 bash[20726]: audit 2026-03-10T14:52:32.542341+0000 mon.a (mon.0) 795 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:33.771 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:33 vm00 bash[20726]: audit 2026-03-10T14:52:32.547368+0000 mon.a (mon.0) 796 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:33.771 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:33 vm00 bash[20726]: cephadm 2026-03-10T14:52:32.552937+0000 mgr.y (mgr.24425) 26 : cephadm [INF] Deploying daemon alertmanager.a on vm00
2026-03-10T14:52:33.771 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 14:52:33 vm00 bash[55421]: 2abcce694348: Pulling fs layer
2026-03-10T14:52:33.771 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 14:52:33 vm00 bash[55421]: 455fd88e5221: Pulling fs layer
2026-03-10T14:52:33.771 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 14:52:33 vm00 bash[55421]: 324153f2810a: Pulling fs layer
2026-03-10T14:52:33.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:33 vm03 bash[23394]: cluster 2026-03-10T14:52:32.231076+0000 mgr.y (mgr.24425) 25 : cluster [DBG] pgmap v7: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s
2026-03-10T14:52:33.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:33 vm03 bash[23394]: audit 2026-03-10T14:52:32.526458+0000 mon.a (mon.0) 792 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:33.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:33 vm03 bash[23394]: audit 2026-03-10T14:52:32.532387+0000 mon.a (mon.0) 793 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:33.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:33 vm03 bash[23394]: audit 2026-03-10T14:52:32.536204+0000 mon.a (mon.0) 794 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:33.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:33 vm03 bash[23394]: audit 2026-03-10T14:52:32.542341+0000 mon.a (mon.0) 795 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:33.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:33 vm03 bash[23394]: audit 2026-03-10T14:52:32.547368+0000 mon.a (mon.0) 796 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:33.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:33 vm03 bash[23394]: cephadm 2026-03-10T14:52:32.552937+0000 mgr.y (mgr.24425) 26 : cephadm [INF] Deploying daemon alertmanager.a on vm00
2026-03-10T14:52:34.036 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:52:33 vm00 bash[21005]: ::ffff:192.168.123.103 - - [10/Mar/2026:14:52:33] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T14:52:34.036 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 14:52:33 vm00 bash[55421]: 455fd88e5221: Verifying Checksum
2026-03-10T14:52:34.036 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 14:52:33 vm00 bash[55421]: 455fd88e5221: Download complete
2026-03-10T14:52:34.357 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 14:52:34 vm00 bash[55421]: 2abcce694348: Verifying Checksum
2026-03-10T14:52:34.357 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 14:52:34 vm00 bash[55421]: 2abcce694348: Download complete
2026-03-10T14:52:34.357 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 14:52:34 vm00 bash[55421]: 2abcce694348: Pull complete
2026-03-10T14:52:34.357 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 14:52:34 vm00 bash[55421]: 324153f2810a: Verifying Checksum
2026-03-10T14:52:34.357 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 14:52:34 vm00 bash[55421]: 324153f2810a: Download complete
2026-03-10T14:52:34.358 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 14:52:34 vm00 bash[55421]: 455fd88e5221: Pull complete
2026-03-10T14:52:34.602 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 14:52:34 vm03 bash[50178]: v1.7.0: Pulling from prometheus/node-exporter
2026-03-10T14:52:34.611 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/mon.c/config
2026-03-10T14:52:34.716 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 14:52:34 vm00 bash[55421]: 324153f2810a: Pull complete
2026-03-10T14:52:34.716 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 14:52:34 vm00 bash[55421]: Digest: sha256:4cb2b9019f1757be8482419002cb7afe028fdba35d47958829e4cfeaf6246d80
2026-03-10T14:52:34.716 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 14:52:34 vm00 bash[55421]: Status: Downloaded newer image for quay.io/prometheus/node-exporter:v1.7.0
2026-03-10T14:52:34.716 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 14:52:34 vm00 bash[55421]: ts=2026-03-10T14:52:34.498Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
2026-03-10T14:52:34.716 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 14:52:34 vm00 bash[55421]: ts=2026-03-10T14:52:34.498Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
2026-03-10T14:52:34.716 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 14:52:34 vm00 bash[55421]: ts=2026-03-10T14:52:34.498Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
2026-03-10T14:52:34.716 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 14:52:34 vm00 bash[55421]: ts=2026-03-10T14:52:34.499Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
2026-03-10T14:52:34.716 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 14:52:34 vm00 bash[55421]: ts=2026-03-10T14:52:34.499Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
2026-03-10T14:52:34.716 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 14:52:34 vm00 bash[55421]: ts=2026-03-10T14:52:34.499Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
2026-03-10T14:52:34.716 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 14:52:34 vm00 bash[55421]: ts=2026-03-10T14:52:34.499Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
2026-03-10T14:52:34.716 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 14:52:34 vm00 bash[55421]: ts=2026-03-10T14:52:34.499Z caller=node_exporter.go:117 level=info collector=arp
2026-03-10T14:52:34.716 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 14:52:34 vm00 bash[55421]: ts=2026-03-10T14:52:34.499Z caller=node_exporter.go:117 level=info collector=bcache
2026-03-10T14:52:34.716 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 14:52:34 vm00 bash[55421]: ts=2026-03-10T14:52:34.499Z caller=node_exporter.go:117 level=info collector=bonding
2026-03-10T14:52:34.716 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 14:52:34 vm00 bash[55421]: ts=2026-03-10T14:52:34.499Z caller=node_exporter.go:117 level=info collector=btrfs
2026-03-10T14:52:34.716 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 14:52:34 vm00 bash[55421]: ts=2026-03-10T14:52:34.499Z caller=node_exporter.go:117 level=info collector=conntrack
2026-03-10T14:52:34.716 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 14:52:34 vm00 bash[55421]: ts=2026-03-10T14:52:34.499Z caller=node_exporter.go:117 level=info collector=cpu
2026-03-10T14:52:34.716 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 14:52:34 vm00 bash[55421]: ts=2026-03-10T14:52:34.499Z caller=node_exporter.go:117 level=info collector=cpufreq
2026-03-10T14:52:34.716 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 14:52:34 vm00 bash[55421]: ts=2026-03-10T14:52:34.499Z caller=node_exporter.go:117 level=info collector=diskstats
2026-03-10T14:52:34.716 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 14:52:34 vm00 bash[55421]: ts=2026-03-10T14:52:34.499Z caller=node_exporter.go:117 level=info collector=dmi
2026-03-10T14:52:34.716 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 14:52:34 vm00 bash[55421]: ts=2026-03-10T14:52:34.499Z caller=node_exporter.go:117 level=info collector=edac
2026-03-10T14:52:34.716 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 14:52:34 vm00 bash[55421]: ts=2026-03-10T14:52:34.499Z caller=node_exporter.go:117 level=info collector=entropy
2026-03-10T14:52:34.716 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 14:52:34 vm00 bash[55421]: ts=2026-03-10T14:52:34.499Z caller=node_exporter.go:117 level=info collector=fibrechannel
2026-03-10T14:52:34.716 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 14:52:34 vm00 bash[55421]: ts=2026-03-10T14:52:34.499Z caller=node_exporter.go:117 level=info collector=filefd
2026-03-10T14:52:34.716 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 14:52:34 vm00 bash[55421]: ts=2026-03-10T14:52:34.499Z caller=node_exporter.go:117 level=info collector=filesystem
2026-03-10T14:52:34.716 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 14:52:34 vm00 bash[55421]: ts=2026-03-10T14:52:34.499Z caller=node_exporter.go:117 level=info collector=hwmon
2026-03-10T14:52:34.716 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 14:52:34 vm00 bash[55421]: ts=2026-03-10T14:52:34.499Z caller=node_exporter.go:117 level=info collector=infiniband
2026-03-10T14:52:34.716 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 14:52:34 vm00 bash[55421]: ts=2026-03-10T14:52:34.499Z caller=node_exporter.go:117 level=info collector=ipvs
2026-03-10T14:52:34.717 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 14:52:34 vm00 bash[55421]: ts=2026-03-10T14:52:34.499Z caller=node_exporter.go:117 level=info collector=loadavg
2026-03-10T14:52:34.717 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 14:52:34 vm00 bash[55421]: ts=2026-03-10T14:52:34.499Z caller=node_exporter.go:117 level=info collector=mdadm
2026-03-10T14:52:34.717 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 14:52:34 vm00 bash[55421]: ts=2026-03-10T14:52:34.499Z caller=node_exporter.go:117 level=info collector=meminfo
2026-03-10T14:52:34.717 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 14:52:34 vm00 bash[55421]: ts=2026-03-10T14:52:34.499Z caller=node_exporter.go:117 level=info collector=netclass
2026-03-10T14:52:34.717 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 14:52:34 vm00 bash[55421]: ts=2026-03-10T14:52:34.499Z caller=node_exporter.go:117 level=info collector=netdev
2026-03-10T14:52:34.717 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 14:52:34 vm00 bash[55421]: ts=2026-03-10T14:52:34.499Z caller=node_exporter.go:117 level=info collector=netstat
2026-03-10T14:52:34.717 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 14:52:34 vm00 bash[55421]: ts=2026-03-10T14:52:34.499Z caller=node_exporter.go:117 level=info collector=nfs
2026-03-10T14:52:34.717 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 14:52:34 vm00 bash[55421]: ts=2026-03-10T14:52:34.499Z caller=node_exporter.go:117 level=info collector=nfsd
2026-03-10T14:52:34.717 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 14:52:34 vm00 bash[55421]: ts=2026-03-10T14:52:34.499Z caller=node_exporter.go:117 level=info collector=nvme
2026-03-10T14:52:34.717 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 14:52:34 vm00 bash[55421]: ts=2026-03-10T14:52:34.499Z caller=node_exporter.go:117 level=info collector=os
2026-03-10T14:52:34.717 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 14:52:34 vm00 bash[55421]: ts=2026-03-10T14:52:34.499Z caller=node_exporter.go:117 level=info collector=powersupplyclass
2026-03-10T14:52:34.717 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 14:52:34 vm00 bash[55421]: ts=2026-03-10T14:52:34.499Z caller=node_exporter.go:117 level=info collector=pressure
2026-03-10T14:52:34.717 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 14:52:34 vm00 bash[55421]: ts=2026-03-10T14:52:34.499Z caller=node_exporter.go:117 level=info collector=rapl
2026-03-10T14:52:34.717 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 14:52:34 vm00 bash[55421]: ts=2026-03-10T14:52:34.499Z caller=node_exporter.go:117 level=info collector=schedstat
2026-03-10T14:52:34.717 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 14:52:34 vm00 bash[55421]: ts=2026-03-10T14:52:34.499Z caller=node_exporter.go:117 level=info collector=selinux
2026-03-10T14:52:34.717 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 14:52:34 vm00 bash[55421]: ts=2026-03-10T14:52:34.499Z caller=node_exporter.go:117 level=info collector=sockstat
2026-03-10T14:52:34.717 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 14:52:34 vm00 bash[55421]: ts=2026-03-10T14:52:34.499Z caller=node_exporter.go:117 level=info collector=softnet
2026-03-10T14:52:34.717 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 14:52:34 vm00 bash[55421]: ts=2026-03-10T14:52:34.499Z caller=node_exporter.go:117 level=info collector=stat
2026-03-10T14:52:34.717 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 14:52:34 vm00 bash[55421]: ts=2026-03-10T14:52:34.499Z caller=node_exporter.go:117 level=info collector=tapestats
2026-03-10T14:52:34.717 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 14:52:34 vm00 bash[55421]: ts=2026-03-10T14:52:34.499Z caller=node_exporter.go:117 level=info collector=textfile
2026-03-10T14:52:34.717 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 14:52:34 vm00 bash[55421]: ts=2026-03-10T14:52:34.499Z caller=node_exporter.go:117 level=info collector=thermal_zone
2026-03-10T14:52:34.717 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 14:52:34 vm00 bash[55421]: ts=2026-03-10T14:52:34.499Z caller=node_exporter.go:117 level=info collector=time
2026-03-10T14:52:34.717 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 14:52:34 vm00 bash[55421]: ts=2026-03-10T14:52:34.499Z caller=node_exporter.go:117 level=info collector=udp_queues
2026-03-10T14:52:34.717 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 14:52:34 vm00 bash[55421]: ts=2026-03-10T14:52:34.499Z caller=node_exporter.go:117 level=info collector=uname
2026-03-10T14:52:34.717 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 14:52:34 vm00 bash[55421]: ts=2026-03-10T14:52:34.499Z caller=node_exporter.go:117 level=info collector=vmstat
2026-03-10T14:52:34.717 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 14:52:34 vm00 bash[55421]: ts=2026-03-10T14:52:34.499Z caller=node_exporter.go:117 level=info collector=xfs
2026-03-10T14:52:34.717 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 14:52:34 vm00 bash[55421]: ts=2026-03-10T14:52:34.499Z caller=node_exporter.go:117 level=info collector=zfs
2026-03-10T14:52:34.717 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 14:52:34 vm00 bash[55421]: ts=2026-03-10T14:52:34.499Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100
2026-03-10T14:52:34.717 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 14:52:34 vm00 bash[55421]: ts=2026-03-10T14:52:34.499Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
2026-03-10T14:52:34.875 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 14:52:34 vm03 bash[50178]: 2abcce694348: Pulling fs layer
2026-03-10T14:52:34.875 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 14:52:34 vm03 bash[50178]: 455fd88e5221: Pulling fs layer
2026-03-10T14:52:34.875 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 14:52:34 vm03 bash[50178]: 324153f2810a: Pulling fs layer
2026-03-10T14:52:35.203 INFO:teuthology.orchestra.run.vm00.stdout:[client.0]
2026-03-10T14:52:35.203 INFO:teuthology.orchestra.run.vm00.stdout: key = AQAzMLBp1IerCxAAma4SW7aAiOBaDRg/ibKg9A==
2026-03-10T14:52:35.285 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-03-10T14:52:35.285 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/etc/ceph/ceph.client.0.keyring
2026-03-10T14:52:35.285 DEBUG:teuthology.orchestra.run.vm00:> sudo chmod 0644 /etc/ceph/ceph.client.0.keyring
2026-03-10T14:52:35.312 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 -- ceph auth get-or-create client.1 mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *'
2026-03-10T14:52:35.375 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 14:52:35 vm03 bash[50178]: 455fd88e5221: Verifying Checksum
2026-03-10T14:52:35.375 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 14:52:35 vm03 bash[50178]: 455fd88e5221: Download complete
2026-03-10T14:52:35.375 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 14:52:35 vm03 bash[50178]: 2abcce694348: Verifying Checksum
2026-03-10T14:52:35.375 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 14:52:35 vm03 bash[50178]: 2abcce694348: Download complete
2026-03-10T14:52:35.375 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 14:52:35 vm03 bash[50178]: 2abcce694348: Pull complete
2026-03-10T14:52:35.375 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 14:52:35 vm03 bash[50178]: 324153f2810a: Verifying Checksum
2026-03-10T14:52:35.375 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 14:52:35 vm03 bash[50178]: 324153f2810a: Download complete
2026-03-10T14:52:35.375 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 14:52:35 vm03 bash[50178]: 455fd88e5221: Pull complete
2026-03-10T14:52:35.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:35 vm03 bash[23394]: cluster 2026-03-10T14:52:34.231317+0000 mgr.y (mgr.24425) 27 : cluster [DBG] pgmap v8: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s
2026-03-10T14:52:35.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:35 vm03 bash[23394]: audit 2026-03-10T14:52:34.299696+0000 mon.a (mon.0) 797 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:35.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:35 vm03 bash[23394]: audit 2026-03-10T14:52:35.195307+0000 mon.c (mon.2) 20 : audit [INF] from='client.? 192.168.123.100:0/1736409559' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-10T14:52:35.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:35 vm03 bash[23394]: audit 2026-03-10T14:52:35.195696+0000 mon.a (mon.0) 798 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-10T14:52:35.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:35 vm03 bash[23394]: audit 2026-03-10T14:52:35.200890+0000 mon.a (mon.0) 799 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished
2026-03-10T14:52:35.593 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 14:52:35 vm03 bash[50178]: 324153f2810a: Pull complete
2026-03-10T14:52:35.593 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 14:52:35 vm03 bash[50178]: Digest: sha256:4cb2b9019f1757be8482419002cb7afe028fdba35d47958829e4cfeaf6246d80
2026-03-10T14:52:35.593 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 14:52:35 vm03 bash[50178]: Status: Downloaded newer image for quay.io/prometheus/node-exporter:v1.7.0
2026-03-10T14:52:35.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:35 vm00 bash[28403]: cluster 2026-03-10T14:52:34.231317+0000 mgr.y (mgr.24425) 27 : cluster [DBG] pgmap v8: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s
2026-03-10T14:52:35.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:35 vm00 bash[28403]: audit 2026-03-10T14:52:34.299696+0000 mon.a (mon.0) 797 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:35.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:35 vm00 bash[28403]: audit 2026-03-10T14:52:35.195307+0000 mon.c (mon.2) 20 : audit [INF] from='client.? 192.168.123.100:0/1736409559' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-10T14:52:35.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:35 vm00 bash[28403]: audit 2026-03-10T14:52:35.195696+0000 mon.a (mon.0) 798 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-10T14:52:35.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:35 vm00 bash[28403]: audit 2026-03-10T14:52:35.200890+0000 mon.a (mon.0) 799 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished
2026-03-10T14:52:35.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:35 vm00 bash[20726]: cluster 2026-03-10T14:52:34.231317+0000 mgr.y (mgr.24425) 27 : cluster [DBG] pgmap v8: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s
2026-03-10T14:52:35.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:35 vm00 bash[20726]: audit 2026-03-10T14:52:34.299696+0000 mon.a (mon.0) 797 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:35.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:35 vm00 bash[20726]: audit 2026-03-10T14:52:35.195307+0000 mon.c (mon.2) 20 : audit [INF] from='client.? 192.168.123.100:0/1736409559' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-10T14:52:35.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:35 vm00 bash[20726]: audit 2026-03-10T14:52:35.195696+0000 mon.a (mon.0) 798 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-10T14:52:35.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:35 vm00 bash[20726]: audit 2026-03-10T14:52:35.200890+0000 mon.a (mon.0) 799 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished
2026-03-10T14:52:35.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:35 vm00 bash[20726]: audit 2026-03-10T14:52:35.200890+0000 mon.a (mon.0) 799 : audit [INF] from='client.?
' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-10T14:52:35.875 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 14:52:35 vm03 bash[50178]: ts=2026-03-10T14:52:35.592Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)" 2026-03-10T14:52:35.875 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 14:52:35 vm03 bash[50178]: ts=2026-03-10T14:52:35.592Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)" 2026-03-10T14:52:35.875 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 14:52:35 vm03 bash[50178]: ts=2026-03-10T14:52:35.593Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$ 2026-03-10T14:52:35.875 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 14:52:35 vm03 bash[50178]: ts=2026-03-10T14:52:35.594Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data 2026-03-10T14:52:35.875 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 14:52:35 vm03 bash[50178]: ts=2026-03-10T14:52:35.594Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/) 2026-03-10T14:52:35.875 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 14:52:35 vm03 bash[50178]: ts=2026-03-10T14:52:35.594Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" 
flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$ 2026-03-10T14:52:35.875 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 14:52:35 vm03 bash[50178]: ts=2026-03-10T14:52:35.595Z caller=node_exporter.go:110 level=info msg="Enabled collectors" 2026-03-10T14:52:35.875 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 14:52:35 vm03 bash[50178]: ts=2026-03-10T14:52:35.595Z caller=node_exporter.go:117 level=info collector=arp 2026-03-10T14:52:35.875 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 14:52:35 vm03 bash[50178]: ts=2026-03-10T14:52:35.595Z caller=node_exporter.go:117 level=info collector=bcache 2026-03-10T14:52:35.875 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 14:52:35 vm03 bash[50178]: ts=2026-03-10T14:52:35.595Z caller=node_exporter.go:117 level=info collector=bonding 2026-03-10T14:52:35.875 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 14:52:35 vm03 bash[50178]: ts=2026-03-10T14:52:35.595Z caller=node_exporter.go:117 level=info collector=btrfs 2026-03-10T14:52:35.875 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 14:52:35 vm03 bash[50178]: ts=2026-03-10T14:52:35.595Z caller=node_exporter.go:117 level=info collector=conntrack 2026-03-10T14:52:35.875 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 14:52:35 vm03 bash[50178]: ts=2026-03-10T14:52:35.595Z caller=node_exporter.go:117 level=info collector=cpu 2026-03-10T14:52:35.875 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 14:52:35 vm03 bash[50178]: ts=2026-03-10T14:52:35.595Z caller=node_exporter.go:117 level=info collector=cpufreq 2026-03-10T14:52:35.875 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 14:52:35 vm03 bash[50178]: ts=2026-03-10T14:52:35.595Z caller=node_exporter.go:117 level=info collector=diskstats 2026-03-10T14:52:35.875 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 
10 14:52:35 vm03 bash[50178]: ts=2026-03-10T14:52:35.595Z caller=node_exporter.go:117 level=info collector=dmi 2026-03-10T14:52:35.875 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 14:52:35 vm03 bash[50178]: ts=2026-03-10T14:52:35.595Z caller=node_exporter.go:117 level=info collector=edac 2026-03-10T14:52:35.876 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 14:52:35 vm03 bash[50178]: ts=2026-03-10T14:52:35.595Z caller=node_exporter.go:117 level=info collector=entropy 2026-03-10T14:52:35.876 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 14:52:35 vm03 bash[50178]: ts=2026-03-10T14:52:35.595Z caller=node_exporter.go:117 level=info collector=fibrechannel 2026-03-10T14:52:35.876 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 14:52:35 vm03 bash[50178]: ts=2026-03-10T14:52:35.595Z caller=node_exporter.go:117 level=info collector=filefd 2026-03-10T14:52:35.876 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 14:52:35 vm03 bash[50178]: ts=2026-03-10T14:52:35.595Z caller=node_exporter.go:117 level=info collector=filesystem 2026-03-10T14:52:35.876 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 14:52:35 vm03 bash[50178]: ts=2026-03-10T14:52:35.595Z caller=node_exporter.go:117 level=info collector=hwmon 2026-03-10T14:52:35.876 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 14:52:35 vm03 bash[50178]: ts=2026-03-10T14:52:35.595Z caller=node_exporter.go:117 level=info collector=infiniband 2026-03-10T14:52:35.876 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 14:52:35 vm03 bash[50178]: ts=2026-03-10T14:52:35.595Z caller=node_exporter.go:117 level=info collector=ipvs 2026-03-10T14:52:35.876 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 14:52:35 vm03 bash[50178]: ts=2026-03-10T14:52:35.595Z caller=node_exporter.go:117 level=info collector=loadavg 2026-03-10T14:52:35.876 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 14:52:35 vm03 bash[50178]: ts=2026-03-10T14:52:35.595Z 
caller=node_exporter.go:117 level=info collector=mdadm 2026-03-10T14:52:35.876 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 14:52:35 vm03 bash[50178]: ts=2026-03-10T14:52:35.595Z caller=node_exporter.go:117 level=info collector=meminfo 2026-03-10T14:52:35.876 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 14:52:35 vm03 bash[50178]: ts=2026-03-10T14:52:35.595Z caller=node_exporter.go:117 level=info collector=netclass 2026-03-10T14:52:35.876 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 14:52:35 vm03 bash[50178]: ts=2026-03-10T14:52:35.595Z caller=node_exporter.go:117 level=info collector=netdev 2026-03-10T14:52:35.876 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 14:52:35 vm03 bash[50178]: ts=2026-03-10T14:52:35.595Z caller=node_exporter.go:117 level=info collector=netstat 2026-03-10T14:52:35.876 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 14:52:35 vm03 bash[50178]: ts=2026-03-10T14:52:35.595Z caller=node_exporter.go:117 level=info collector=nfs 2026-03-10T14:52:35.876 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 14:52:35 vm03 bash[50178]: ts=2026-03-10T14:52:35.595Z caller=node_exporter.go:117 level=info collector=nfsd 2026-03-10T14:52:35.876 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 14:52:35 vm03 bash[50178]: ts=2026-03-10T14:52:35.595Z caller=node_exporter.go:117 level=info collector=nvme 2026-03-10T14:52:35.876 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 14:52:35 vm03 bash[50178]: ts=2026-03-10T14:52:35.595Z caller=node_exporter.go:117 level=info collector=os 2026-03-10T14:52:35.876 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 14:52:35 vm03 bash[50178]: ts=2026-03-10T14:52:35.595Z caller=node_exporter.go:117 level=info collector=powersupplyclass 2026-03-10T14:52:35.876 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 14:52:35 vm03 bash[50178]: ts=2026-03-10T14:52:35.595Z caller=node_exporter.go:117 level=info collector=pressure 2026-03-10T14:52:35.876 
INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 14:52:35 vm03 bash[50178]: ts=2026-03-10T14:52:35.595Z caller=node_exporter.go:117 level=info collector=rapl 2026-03-10T14:52:35.876 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 14:52:35 vm03 bash[50178]: ts=2026-03-10T14:52:35.595Z caller=node_exporter.go:117 level=info collector=schedstat 2026-03-10T14:52:35.876 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 14:52:35 vm03 bash[50178]: ts=2026-03-10T14:52:35.595Z caller=node_exporter.go:117 level=info collector=selinux 2026-03-10T14:52:35.876 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 14:52:35 vm03 bash[50178]: ts=2026-03-10T14:52:35.595Z caller=node_exporter.go:117 level=info collector=sockstat 2026-03-10T14:52:35.876 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 14:52:35 vm03 bash[50178]: ts=2026-03-10T14:52:35.595Z caller=node_exporter.go:117 level=info collector=softnet 2026-03-10T14:52:35.876 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 14:52:35 vm03 bash[50178]: ts=2026-03-10T14:52:35.595Z caller=node_exporter.go:117 level=info collector=stat 2026-03-10T14:52:35.876 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 14:52:35 vm03 bash[50178]: ts=2026-03-10T14:52:35.595Z caller=node_exporter.go:117 level=info collector=tapestats 2026-03-10T14:52:35.876 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 14:52:35 vm03 bash[50178]: ts=2026-03-10T14:52:35.595Z caller=node_exporter.go:117 level=info collector=textfile 2026-03-10T14:52:35.876 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 14:52:35 vm03 bash[50178]: ts=2026-03-10T14:52:35.595Z caller=node_exporter.go:117 level=info collector=thermal_zone 2026-03-10T14:52:35.876 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 14:52:35 vm03 bash[50178]: ts=2026-03-10T14:52:35.595Z caller=node_exporter.go:117 level=info collector=time 2026-03-10T14:52:35.876 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 14:52:35 vm03 
bash[50178]: ts=2026-03-10T14:52:35.595Z caller=node_exporter.go:117 level=info collector=udp_queues 2026-03-10T14:52:35.876 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 14:52:35 vm03 bash[50178]: ts=2026-03-10T14:52:35.595Z caller=node_exporter.go:117 level=info collector=uname 2026-03-10T14:52:35.876 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 14:52:35 vm03 bash[50178]: ts=2026-03-10T14:52:35.595Z caller=node_exporter.go:117 level=info collector=vmstat 2026-03-10T14:52:35.876 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 14:52:35 vm03 bash[50178]: ts=2026-03-10T14:52:35.595Z caller=node_exporter.go:117 level=info collector=xfs 2026-03-10T14:52:35.876 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 14:52:35 vm03 bash[50178]: ts=2026-03-10T14:52:35.595Z caller=node_exporter.go:117 level=info collector=zfs 2026-03-10T14:52:35.876 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 14:52:35 vm03 bash[50178]: ts=2026-03-10T14:52:35.596Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100 2026-03-10T14:52:35.876 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 14:52:35 vm03 bash[50178]: ts=2026-03-10T14:52:35.596Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100 2026-03-10T14:52:36.859 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 10 14:52:36 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:52:36.859 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 10 14:52:36 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:52:36.859 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 14:52:36 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:52:36.859 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 14:52:36 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:52:36.859 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:36 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:52:36.859 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:36 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:52:36.859 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:52:36 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:52:36.859 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 10 14:52:36 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:52:36.859 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 10 14:52:36 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:52:36.859 INFO:journalctl@ceph.rgw.foo.a.vm00.stdout:Mar 10 14:52:36 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-10T14:52:37.178 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 10 14:52:37 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:52:37.178 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 14:52:36 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:52:37.178 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 14:52:36 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:52:37.178 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 14:52:36 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-10T14:52:37.178 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 14:52:37 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:52:37.178 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 14:52:37 vm00 systemd[1]: Started Ceph alertmanager.a for 93bd26bc-1c8f-11f1-8404-610ce866bde7. 2026-03-10T14:52:37.178 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 10 14:52:37 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:52:37.178 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 14:52:37 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:52:37.178 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:37 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:52:37.179 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:52:37 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:52:37.179 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:37 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:52:37.179 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 10 14:52:37 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:52:37.179 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 10 14:52:37 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-10T14:52:37.179 INFO:journalctl@ceph.rgw.foo.a.vm00.stdout:Mar 10 14:52:37 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:52:37.466 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:52:37 vm00 bash[21005]: [10/Mar/2026:14:52:37] ENGINE Bus STOPPING 2026-03-10T14:52:37.466 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:52:37 vm00 bash[21005]: [10/Mar/2026:14:52:37] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-10T14:52:37.466 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:52:37 vm00 bash[21005]: [10/Mar/2026:14:52:37] ENGINE Bus STOPPED 2026-03-10T14:52:37.466 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:52:37 vm00 bash[21005]: [10/Mar/2026:14:52:37] ENGINE Bus STARTING 2026-03-10T14:52:37.466 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:52:37 vm00 bash[21005]: [10/Mar/2026:14:52:37] ENGINE Serving on http://:::9283 2026-03-10T14:52:37.466 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:52:37 vm00 bash[21005]: [10/Mar/2026:14:52:37] ENGINE Bus STARTED 2026-03-10T14:52:37.466 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 14:52:37 vm00 bash[55880]: ts=2026-03-10T14:52:37.254Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)" 2026-03-10T14:52:37.466 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 14:52:37 vm00 bash[55880]: ts=2026-03-10T14:52:37.254Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)" 2026-03-10T14:52:37.466 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 14:52:37 vm00 
bash[55880]: ts=2026-03-10T14:52:37.258Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.123.100 port=9094 2026-03-10T14:52:37.466 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 14:52:37 vm00 bash[55880]: ts=2026-03-10T14:52:37.258Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s 2026-03-10T14:52:37.466 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 14:52:37 vm00 bash[55880]: ts=2026-03-10T14:52:37.281Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml 2026-03-10T14:52:37.466 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 14:52:37 vm00 bash[55880]: ts=2026-03-10T14:52:37.281Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml 2026-03-10T14:52:37.466 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 14:52:37 vm00 bash[55880]: ts=2026-03-10T14:52:37.283Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9093 2026-03-10T14:52:37.466 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 14:52:37 vm00 bash[55880]: ts=2026-03-10T14:52:37.283Z caller=tls_config.go:235 level=info msg="TLS is disabled." 
http2=false address=[::]:9093 2026-03-10T14:52:37.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:37 vm03 bash[23394]: cluster 2026-03-10T14:52:36.231796+0000 mgr.y (mgr.24425) 28 : cluster [DBG] pgmap v9: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-10T14:52:37.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:37 vm03 bash[23394]: cluster 2026-03-10T14:52:36.231796+0000 mgr.y (mgr.24425) 28 : cluster [DBG] pgmap v9: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-10T14:52:37.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:37 vm03 bash[23394]: audit 2026-03-10T14:52:37.119808+0000 mon.a (mon.0) 800 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:37.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:37 vm03 bash[23394]: audit 2026-03-10T14:52:37.119808+0000 mon.a (mon.0) 800 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:37.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:37 vm03 bash[23394]: audit 2026-03-10T14:52:37.126309+0000 mon.a (mon.0) 801 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:37.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:37 vm03 bash[23394]: audit 2026-03-10T14:52:37.126309+0000 mon.a (mon.0) 801 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:37.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:37 vm03 bash[23394]: audit 2026-03-10T14:52:37.138354+0000 mon.a (mon.0) 802 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:37.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:37 vm03 bash[23394]: audit 2026-03-10T14:52:37.138354+0000 mon.a (mon.0) 802 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 
2026-03-10T14:52:37.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:37 vm03 bash[23394]: audit 2026-03-10T14:52:37.145002+0000 mon.a (mon.0) 803 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:37.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:37 vm03 bash[23394]: audit 2026-03-10T14:52:37.145002+0000 mon.a (mon.0) 803 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:37.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:37 vm03 bash[23394]: audit 2026-03-10T14:52:37.195191+0000 mon.a (mon.0) 804 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:37.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:37 vm03 bash[23394]: audit 2026-03-10T14:52:37.195191+0000 mon.a (mon.0) 804 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:37.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:37 vm03 bash[23394]: audit 2026-03-10T14:52:37.199693+0000 mon.a (mon.0) 805 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:37.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:37 vm03 bash[23394]: audit 2026-03-10T14:52:37.199693+0000 mon.a (mon.0) 805 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:37.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:37 vm03 bash[23394]: audit 2026-03-10T14:52:37.202255+0000 mon.a (mon.0) 806 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T14:52:37.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:37 vm03 bash[23394]: audit 2026-03-10T14:52:37.202255+0000 mon.a (mon.0) 806 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 
2026-03-10T14:52:37.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:37 vm03 bash[23394]: audit 2026-03-10T14:52:37.206657+0000 mon.a (mon.0) 807 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:37.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:37 vm03 bash[23394]: audit 2026-03-10T14:52:37.206657+0000 mon.a (mon.0) 807 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:37.876 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:37 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T14:52:37.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:37 vm00 bash[28403]: cluster 2026-03-10T14:52:36.231796+0000 mgr.y (mgr.24425) 28 : cluster [DBG] pgmap v9: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-10T14:52:37.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:37 vm00 bash[28403]: cluster 2026-03-10T14:52:36.231796+0000 mgr.y (mgr.24425) 28 : cluster [DBG] pgmap v9: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-10T14:52:37.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:37 vm00 bash[28403]: audit 2026-03-10T14:52:37.119808+0000 mon.a (mon.0) 800 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:37.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:37 vm00 bash[28403]: audit 2026-03-10T14:52:37.119808+0000 mon.a (mon.0) 800 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:37.966 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:37 vm00 bash[28403]: audit 2026-03-10T14:52:37.126309+0000 mon.a (mon.0) 801 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:37.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:37 vm00 bash[28403]: audit 2026-03-10T14:52:37.126309+0000 mon.a (mon.0) 801 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:37.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:37 vm00 bash[28403]: audit 2026-03-10T14:52:37.138354+0000 mon.a (mon.0) 802 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:37.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:37 vm00 bash[28403]: audit 2026-03-10T14:52:37.138354+0000 mon.a (mon.0) 802 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:37.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:37 vm00 bash[28403]: audit 2026-03-10T14:52:37.145002+0000 mon.a (mon.0) 803 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:37.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:37 vm00 bash[28403]: audit 2026-03-10T14:52:37.145002+0000 mon.a (mon.0) 803 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:37.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:37 vm00 bash[28403]: audit 2026-03-10T14:52:37.195191+0000 mon.a (mon.0) 804 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:37.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:37 vm00 bash[28403]: audit 2026-03-10T14:52:37.195191+0000 mon.a (mon.0) 804 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:37.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:37 vm00 bash[28403]: audit 2026-03-10T14:52:37.199693+0000 mon.a (mon.0) 805 : audit [INF] from='mgr.24425 
192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:37.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:37 vm00 bash[28403]: audit 2026-03-10T14:52:37.199693+0000 mon.a (mon.0) 805 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:37.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:37 vm00 bash[28403]: audit 2026-03-10T14:52:37.202255+0000 mon.a (mon.0) 806 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T14:52:37.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:37 vm00 bash[28403]: audit 2026-03-10T14:52:37.202255+0000 mon.a (mon.0) 806 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T14:52:37.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:37 vm00 bash[28403]: audit 2026-03-10T14:52:37.206657+0000 mon.a (mon.0) 807 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:37.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:37 vm00 bash[28403]: audit 2026-03-10T14:52:37.206657+0000 mon.a (mon.0) 807 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:37.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:37 vm00 bash[20726]: cluster 2026-03-10T14:52:36.231796+0000 mgr.y (mgr.24425) 28 : cluster [DBG] pgmap v9: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-10T14:52:37.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:37 vm00 bash[20726]: cluster 2026-03-10T14:52:36.231796+0000 mgr.y (mgr.24425) 28 : cluster [DBG] pgmap v9: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-10T14:52:37.966 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:37 vm00 bash[20726]: audit 2026-03-10T14:52:37.119808+0000 mon.a (mon.0) 800 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:37.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:37 vm00 bash[20726]: audit 2026-03-10T14:52:37.119808+0000 mon.a (mon.0) 800 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:37.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:37 vm00 bash[20726]: audit 2026-03-10T14:52:37.126309+0000 mon.a (mon.0) 801 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:37.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:37 vm00 bash[20726]: audit 2026-03-10T14:52:37.126309+0000 mon.a (mon.0) 801 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:37.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:37 vm00 bash[20726]: audit 2026-03-10T14:52:37.138354+0000 mon.a (mon.0) 802 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:37.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:37 vm00 bash[20726]: audit 2026-03-10T14:52:37.138354+0000 mon.a (mon.0) 802 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:37.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:37 vm00 bash[20726]: audit 2026-03-10T14:52:37.145002+0000 mon.a (mon.0) 803 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:37.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:37 vm00 bash[20726]: audit 2026-03-10T14:52:37.145002+0000 mon.a (mon.0) 803 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:37.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:37 vm00 bash[20726]: audit 2026-03-10T14:52:37.195191+0000 mon.a (mon.0) 804 : audit [INF] from='mgr.24425 
192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:37.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:37 vm00 bash[20726]: audit 2026-03-10T14:52:37.195191+0000 mon.a (mon.0) 804 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:37.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:37 vm00 bash[20726]: audit 2026-03-10T14:52:37.199693+0000 mon.a (mon.0) 805 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:37.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:37 vm00 bash[20726]: audit 2026-03-10T14:52:37.199693+0000 mon.a (mon.0) 805 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:37.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:37 vm00 bash[20726]: audit 2026-03-10T14:52:37.202255+0000 mon.a (mon.0) 806 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T14:52:37.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:37 vm00 bash[20726]: audit 2026-03-10T14:52:37.202255+0000 mon.a (mon.0) 806 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T14:52:37.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:37 vm00 bash[20726]: audit 2026-03-10T14:52:37.206657+0000 mon.a (mon.0) 807 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:37.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:37 vm00 bash[20726]: audit 2026-03-10T14:52:37.206657+0000 mon.a (mon.0) 807 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:38.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:38 vm03 bash[23394]: cephadm 2026-03-10T14:52:37.151458+0000 mgr.y (mgr.24425) 29 : cephadm [INF] Regenerating cephadm 
self-signed grafana TLS certificates 2026-03-10T14:52:38.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:38 vm03 bash[23394]: cephadm 2026-03-10T14:52:37.151458+0000 mgr.y (mgr.24425) 29 : cephadm [INF] Regenerating cephadm self-signed grafana TLS certificates 2026-03-10T14:52:38.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:38 vm03 bash[23394]: audit 2026-03-10T14:52:37.202680+0000 mgr.y (mgr.24425) 30 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T14:52:38.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:38 vm03 bash[23394]: audit 2026-03-10T14:52:37.202680+0000 mgr.y (mgr.24425) 30 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T14:52:38.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:38 vm03 bash[23394]: cephadm 2026-03-10T14:52:37.215864+0000 mgr.y (mgr.24425) 31 : cephadm [INF] Deploying daemon grafana.a on vm03 2026-03-10T14:52:38.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:38 vm03 bash[23394]: cephadm 2026-03-10T14:52:37.215864+0000 mgr.y (mgr.24425) 31 : cephadm [INF] Deploying daemon grafana.a on vm03 2026-03-10T14:52:38.875 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 14:52:38 vm03 bash[48459]: debug there is no tcmu-runner data available 2026-03-10T14:52:38.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:38 vm00 bash[28403]: cephadm 2026-03-10T14:52:37.151458+0000 mgr.y (mgr.24425) 29 : cephadm [INF] Regenerating cephadm self-signed grafana TLS certificates 2026-03-10T14:52:38.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:38 vm00 bash[28403]: cephadm 2026-03-10T14:52:37.151458+0000 mgr.y (mgr.24425) 29 : cephadm [INF] Regenerating cephadm self-signed grafana TLS certificates 2026-03-10T14:52:38.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:38 vm00 bash[28403]: audit 
2026-03-10T14:52:37.202680+0000 mgr.y (mgr.24425) 30 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T14:52:38.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:38 vm00 bash[28403]: audit 2026-03-10T14:52:37.202680+0000 mgr.y (mgr.24425) 30 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T14:52:38.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:38 vm00 bash[28403]: cephadm 2026-03-10T14:52:37.215864+0000 mgr.y (mgr.24425) 31 : cephadm [INF] Deploying daemon grafana.a on vm03 2026-03-10T14:52:38.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:38 vm00 bash[28403]: cephadm 2026-03-10T14:52:37.215864+0000 mgr.y (mgr.24425) 31 : cephadm [INF] Deploying daemon grafana.a on vm03 2026-03-10T14:52:38.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:38 vm00 bash[20726]: cephadm 2026-03-10T14:52:37.151458+0000 mgr.y (mgr.24425) 29 : cephadm [INF] Regenerating cephadm self-signed grafana TLS certificates 2026-03-10T14:52:38.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:38 vm00 bash[20726]: cephadm 2026-03-10T14:52:37.151458+0000 mgr.y (mgr.24425) 29 : cephadm [INF] Regenerating cephadm self-signed grafana TLS certificates 2026-03-10T14:52:38.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:38 vm00 bash[20726]: audit 2026-03-10T14:52:37.202680+0000 mgr.y (mgr.24425) 30 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T14:52:38.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:38 vm00 bash[20726]: audit 2026-03-10T14:52:37.202680+0000 mgr.y (mgr.24425) 30 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T14:52:38.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:38 vm00 bash[20726]: cephadm 2026-03-10T14:52:37.215864+0000 mgr.y (mgr.24425) 31 : cephadm [INF] Deploying daemon grafana.a on vm03 2026-03-10T14:52:38.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:38 vm00 bash[20726]: cephadm 2026-03-10T14:52:37.215864+0000 mgr.y (mgr.24425) 31 : cephadm [INF] Deploying daemon grafana.a on vm03 2026-03-10T14:52:39.528 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 14:52:39 vm00 bash[55880]: ts=2026-03-10T14:52:39.259Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000252973s 2026-03-10T14:52:39.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:39 vm03 bash[23394]: cluster 2026-03-10T14:52:38.232190+0000 mgr.y (mgr.24425) 32 : cluster [DBG] pgmap v10: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-10T14:52:39.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:39 vm03 bash[23394]: cluster 2026-03-10T14:52:38.232190+0000 mgr.y (mgr.24425) 32 : cluster [DBG] pgmap v10: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-10T14:52:39.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:39 vm03 bash[23394]: audit 2026-03-10T14:52:38.506171+0000 mgr.y (mgr.24425) 33 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:52:39.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:39 vm03 bash[23394]: audit 2026-03-10T14:52:38.506171+0000 mgr.y (mgr.24425) 33 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:52:39.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:39 
vm03 bash[23394]: audit 2026-03-10T14:52:39.308126+0000 mon.a (mon.0) 808 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:39.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:39 vm03 bash[23394]: audit 2026-03-10T14:52:39.308126+0000 mon.a (mon.0) 808 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:39.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:39 vm03 bash[23394]: audit 2026-03-10T14:52:39.315357+0000 mon.a (mon.0) 809 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:52:39.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:39 vm03 bash[23394]: audit 2026-03-10T14:52:39.315357+0000 mon.a (mon.0) 809 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:52:39.956 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/mon.b/config 2026-03-10T14:52:39.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:39 vm00 bash[28403]: cluster 2026-03-10T14:52:38.232190+0000 mgr.y (mgr.24425) 32 : cluster [DBG] pgmap v10: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-10T14:52:39.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:39 vm00 bash[28403]: cluster 2026-03-10T14:52:38.232190+0000 mgr.y (mgr.24425) 32 : cluster [DBG] pgmap v10: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-10T14:52:39.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:39 vm00 bash[28403]: audit 2026-03-10T14:52:38.506171+0000 mgr.y (mgr.24425) 33 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 
2026-03-10T14:52:39.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:39 vm00 bash[28403]: audit 2026-03-10T14:52:38.506171+0000 mgr.y (mgr.24425) 33 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:52:39.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:39 vm00 bash[28403]: audit 2026-03-10T14:52:39.308126+0000 mon.a (mon.0) 808 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:39.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:39 vm00 bash[28403]: audit 2026-03-10T14:52:39.308126+0000 mon.a (mon.0) 808 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:39.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:39 vm00 bash[28403]: audit 2026-03-10T14:52:39.315357+0000 mon.a (mon.0) 809 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:52:39.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:39 vm00 bash[28403]: audit 2026-03-10T14:52:39.315357+0000 mon.a (mon.0) 809 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:52:39.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:39 vm00 bash[20726]: cluster 2026-03-10T14:52:38.232190+0000 mgr.y (mgr.24425) 32 : cluster [DBG] pgmap v10: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-10T14:52:39.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:39 vm00 bash[20726]: cluster 2026-03-10T14:52:38.232190+0000 mgr.y (mgr.24425) 32 : cluster [DBG] pgmap v10: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-10T14:52:39.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:39 vm00 
bash[20726]: audit 2026-03-10T14:52:38.506171+0000 mgr.y (mgr.24425) 33 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:52:39.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:39 vm00 bash[20726]: audit 2026-03-10T14:52:38.506171+0000 mgr.y (mgr.24425) 33 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:52:39.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:39 vm00 bash[20726]: audit 2026-03-10T14:52:39.308126+0000 mon.a (mon.0) 808 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:39.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:39 vm00 bash[20726]: audit 2026-03-10T14:52:39.308126+0000 mon.a (mon.0) 808 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:39.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:39 vm00 bash[20726]: audit 2026-03-10T14:52:39.315357+0000 mon.a (mon.0) 809 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:52:39.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:39 vm00 bash[20726]: audit 2026-03-10T14:52:39.315357+0000 mon.a (mon.0) 809 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:52:40.339 INFO:teuthology.orchestra.run.vm03.stdout:[client.1] 2026-03-10T14:52:40.339 INFO:teuthology.orchestra.run.vm03.stdout: key = AQA4MLBpqjF7ExAAJEFdD1XW3j54s5tMlvreSQ== 2026-03-10T14:52:40.570 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-10T14:52:40.570 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/etc/ceph/ceph.client.1.keyring 2026-03-10T14:52:40.570 DEBUG:teuthology.orchestra.run.vm03:> sudo chmod 0644 /etc/ceph/ceph.client.1.keyring 
2026-03-10T14:52:40.588 INFO:tasks.ceph:Waiting until ceph daemons up and pgs clean... 2026-03-10T14:52:40.588 INFO:tasks.cephadm.ceph_manager.ceph:waiting for mgr available 2026-03-10T14:52:40.588 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 -- ceph mgr dump --format=json 2026-03-10T14:52:40.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:40 vm03 bash[23394]: audit 2026-03-10T14:52:40.323312+0000 mon.b (mon.1) 29 : audit [INF] from='client.? 192.168.123.103:0/3028420414' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T14:52:40.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:40 vm03 bash[23394]: audit 2026-03-10T14:52:40.323312+0000 mon.b (mon.1) 29 : audit [INF] from='client.? 192.168.123.103:0/3028420414' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T14:52:40.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:40 vm03 bash[23394]: audit 2026-03-10T14:52:40.326671+0000 mon.a (mon.0) 810 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T14:52:40.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:40 vm03 bash[23394]: audit 2026-03-10T14:52:40.326671+0000 mon.a (mon.0) 810 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T14:52:40.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:40 vm03 bash[23394]: audit 2026-03-10T14:52:40.337694+0000 mon.a (mon.0) 811 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-10T14:52:40.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:40 vm03 bash[23394]: audit 2026-03-10T14:52:40.337694+0000 mon.a (mon.0) 811 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-10T14:52:40.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:40 vm00 bash[28403]: audit 2026-03-10T14:52:40.323312+0000 mon.b (mon.1) 29 : audit [INF] from='client.? 192.168.123.103:0/3028420414' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T14:52:40.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:40 vm00 bash[28403]: audit 2026-03-10T14:52:40.323312+0000 mon.b (mon.1) 29 : audit [INF] from='client.? 192.168.123.103:0/3028420414' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T14:52:40.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:40 vm00 bash[28403]: audit 2026-03-10T14:52:40.326671+0000 mon.a (mon.0) 810 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T14:52:40.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:40 vm00 bash[28403]: audit 2026-03-10T14:52:40.326671+0000 mon.a (mon.0) 810 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T14:52:40.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:40 vm00 bash[28403]: audit 2026-03-10T14:52:40.337694+0000 mon.a (mon.0) 811 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-10T14:52:40.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:40 vm00 bash[28403]: audit 2026-03-10T14:52:40.337694+0000 mon.a (mon.0) 811 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-10T14:52:40.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:40 vm00 bash[20726]: audit 2026-03-10T14:52:40.323312+0000 mon.b (mon.1) 29 : audit [INF] from='client.? 192.168.123.103:0/3028420414' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T14:52:40.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:40 vm00 bash[20726]: audit 2026-03-10T14:52:40.323312+0000 mon.b (mon.1) 29 : audit [INF] from='client.? 
192.168.123.103:0/3028420414' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T14:52:40.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:40 vm00 bash[20726]: audit 2026-03-10T14:52:40.326671+0000 mon.a (mon.0) 810 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T14:52:40.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:40 vm00 bash[20726]: audit 2026-03-10T14:52:40.326671+0000 mon.a (mon.0) 810 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T14:52:40.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:40 vm00 bash[20726]: audit 2026-03-10T14:52:40.337694+0000 mon.a (mon.0) 811 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-10T14:52:40.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:40 vm00 bash[20726]: audit 2026-03-10T14:52:40.337694+0000 mon.a (mon.0) 811 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-10T14:52:41.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:41 vm03 bash[23394]: cluster 2026-03-10T14:52:40.232602+0000 mgr.y (mgr.24425) 34 : cluster [DBG] pgmap v11: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-10T14:52:41.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:41 vm03 bash[23394]: cluster 2026-03-10T14:52:40.232602+0000 mgr.y (mgr.24425) 34 : cluster [DBG] pgmap v11: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-10T14:52:41.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:41 vm00 bash[28403]: cluster 2026-03-10T14:52:40.232602+0000 mgr.y (mgr.24425) 34 : cluster [DBG] pgmap v11: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-10T14:52:41.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:41 vm00 bash[28403]: cluster 2026-03-10T14:52:40.232602+0000 mgr.y (mgr.24425) 34 : cluster [DBG] pgmap v11: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-10T14:52:41.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:41 vm00 bash[20726]: cluster 2026-03-10T14:52:40.232602+0000 mgr.y (mgr.24425) 34 : cluster [DBG] pgmap v11: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-10T14:52:41.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:41 vm00 bash[20726]: cluster 2026-03-10T14:52:40.232602+0000 mgr.y (mgr.24425) 34 : cluster [DBG] pgmap v11: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-10T14:52:43.876 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:43 vm03 bash[23394]: cluster 2026-03-10T14:52:42.233131+0000 mgr.y (mgr.24425) 35 : cluster [DBG] pgmap v12: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-10T14:52:43.877 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:43 vm03 bash[23394]: cluster 2026-03-10T14:52:42.233131+0000 mgr.y (mgr.24425) 35 : cluster [DBG] pgmap v12: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-10T14:52:43.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:43 vm00 bash[28403]: cluster 2026-03-10T14:52:42.233131+0000 mgr.y (mgr.24425) 35 : cluster [DBG] pgmap v12: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-10T14:52:43.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:43 vm00 bash[28403]: cluster 2026-03-10T14:52:42.233131+0000 mgr.y (mgr.24425) 35 : cluster [DBG] pgmap v12: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-10T14:52:43.966 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:52:43 vm00 bash[21005]: ::ffff:192.168.123.103 - - [10/Mar/2026:14:52:43] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:52:43.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:43 vm00 bash[20726]: cluster 2026-03-10T14:52:42.233131+0000 mgr.y (mgr.24425) 35 : cluster [DBG] pgmap v12: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-10T14:52:43.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:43 vm00 bash[20726]: cluster 2026-03-10T14:52:42.233131+0000 mgr.y (mgr.24425) 35 : cluster [DBG] pgmap v12: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-10T14:52:45.232 
INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/mon.c/config 2026-03-10T14:52:45.554 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T14:52:45.682 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":21,"flags":0,"active_gid":24425,"active_name":"y","active_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6800","nonce":1481551033}]},"active_addr":"192.168.123.100:6800/1481551033","active_change":"2026-03-10T14:52:24.194133+0000","active_mgr_features":4540701547738038271,"available":true,"standbys":[{"gid":14529,"name":"x","mgr_features":4540701547738038271,"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts 
to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across 
cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to 
days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","read","upmap","upmap-read"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_pg_upmap_activity":{"name":"update_pg_upmap_activity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Updates pg_upmap activity stats to be used in `balancer status detail`","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent 
ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cephadm_log_destination":{"name":"cephadm_log_destination","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":["file","file,syslog","syslog"],"desc":"Destination for cephadm command's persistent logging","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in 
Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.25.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/grafana:10.4.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger collector container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/loki:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.7.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:1.2.5","min":"","max":"","enum_allowed":[],"desc":"Nvme-of container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.51.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/promtail:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_samba":{"name":"container_image_samba","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-server:devbuilds-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba/SMB container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with `--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"int","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"quay.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. 
This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"grafana_dashboards_path":{"name":"grafana_dashboards_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/grafana/dashboards/ceph-dashboard/","min":"","max":"","enum_allowed":[],"desc":"location of dashboards to include in grafana deployments","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"hw_monitoring":{"name":"hw_monitoring","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Deploy hw monitoring daemon on every host.","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), 
partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel\"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage 
/etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"oob_default_addr":{"name":"oob_default_addr","type":"str","level":"advanced","flags":0,"default_value":"169.254.1.1","min":"","max":"","enum_allowed":[],"desc":"Default address for RedFish API (oob management).","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). 
Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_count_max":{"name":"ssh_keepalive_count_max","type":"int","level":"advanced","flags":0,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"How many times ssh connections can fail 
liveness checks before the host is marked offline","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_interval":{"name":"ssh_keepalive_interval","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"How often ssh connections are checked for liveness","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by 
cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"
FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","
max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"default_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"def
ault_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the 
day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True
","min":"","max":"","enum_allowed":[],"desc":"monitor device health metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health 
metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":
0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level
","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas 
across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local 
pool","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bo
ol","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","typ
e":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator 
backend","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":""
,"enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The 
factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to 
sleep","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","
max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_a
lso":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_requests":{"name":"max_requests","type":"int","level":"advanced","flags":0,"default_value":"500","min":"","max":"","enum_allowed":[],"desc":"Maximum number of requests to keep in memory. When new request comes in, the oldest request will be removed if the number of requests exceeds the max request number. 
if un-finished request is removed, error message will be logged in the ceph-mgr log.","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"secondary_zone_period_retry_limit":{"name":"secondary_zone_period_retry_limit","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"RGW module period update retry limit for secondary 
site","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered 
PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[
],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":
"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tag
s":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the 
cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","lon
g_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advan
ced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"periodic_async_work":{"name":"periodic_async_work","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Periodically check for async 
work","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_no_wait":{"name":"snapshot_clone_no_wait","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Reject subvolume clone request when cloner threads are busy","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error
","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"default_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}]}],"modules":["cephadm","dashboard","iostat","nfs","prometheus","restful"],"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health 
status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP 
port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, 
etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","read","upmap","upmap-read"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_pg_upmap_activity":{"name":"update_pg_upmap_activity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Updates pg_upmap activity stats to be used in `balancer status detail`","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent 
ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cephadm_log_destination":{"name":"cephadm_log_destination","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":["file","file,syslog","syslog"],"desc":"Destination for cephadm command's persistent logging","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in 
Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.25.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/grafana:10.4.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger collector container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/loki:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.7.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:1.2.5","min":"","max":"","enum_allowed":[],"desc":"Nvme-of container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.51.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/promtail:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_samba":{"name":"container_image_samba","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-server:devbuilds-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba/SMB container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with `--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"int","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"quay.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. 
This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"grafana_dashboards_path":{"name":"grafana_dashboards_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/grafana/dashboards/ceph-dashboard/","min":"","max":"","enum_allowed":[],"desc":"location of dashboards to include in grafana deployments","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"hw_monitoring":{"name":"hw_monitoring","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Deploy hw monitoring daemon on every host.","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), 
partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel\"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage 
/etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"oob_default_addr":{"name":"oob_default_addr","type":"str","level":"advanced","flags":0,"default_value":"169.254.1.1","min":"","max":"","enum_allowed":[],"desc":"Default address for RedFish API (oob management).","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). 
Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_count_max":{"name":"ssh_keepalive_count_max","type":"int","level":"advanced","flags":0,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"How many times ssh connections can fail 
liveness checks before the host is marked offline","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_interval":{"name":"ssh_keepalive_interval","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"How often ssh connections are checked for liveness","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by 
cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"
FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","
max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"default_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"def
ault_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the 
day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True
","min":"","max":"","enum_allowed":[],"desc":"monitor device health metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health 
metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":
0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level
","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas 
across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local 
pool","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bo
ol","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","typ
e":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator 
backend","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":""
,"enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The 
factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to 
sleep","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","
max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_a
lso":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_requests":{"name":"max_requests","type":"int","level":"advanced","flags":0,"default_value":"500","min":"","max":"","enum_allowed":[],"desc":"Maximum number of requests to keep in memory. When new request comes in, the oldest request will be removed if the number of requests exceeds the max request number. 
if un-finished request is removed, error message will be logged in the ceph-mgr log.","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"secondary_zone_period_retry_limit":{"name":"secondary_zone_period_retry_limit","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"RGW module period update retry limit for secondary 
site","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered 
PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[
],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":
"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tag
s":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the 
cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","lon
g_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advan
ced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"periodic_async_work":{"name":"periodic_async_work","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Periodically check for async 
work","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_no_wait":{"name":"snapshot_clone_no_wait","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Reject subvolume clone request when cloner threads are busy","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error
","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"default_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}],"services":{"dashboard":"https://192.168.123.100:8443/","prometheus":"http://192.168.123.100:9283/"},"always_on_modules":{"octopus":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"pacific":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"quincy":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"reef":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"squid":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"]},"force_disabled_modules":{},"last_failure_osd_epoch":69,"active_clients":[{"name":"devicehealth","addrvec":[{"type":"v2","addr":"192.168.123.100:0","nonce":1914497732}]},{"name":"libcep
hsqlite","addrvec":[{"type":"v2","addr":"192.168.123.100:0","nonce":2750219356}]},{"name":"rbd_support","addrvec":[{"type":"v2","addr":"192.168.123.100:0","nonce":4039009452}]},{"name":"volumes","addrvec":[{"type":"v2","addr":"192.168.123.100:0","nonce":4035991365}]}]} 2026-03-10T14:52:45.683 INFO:tasks.cephadm.ceph_manager.ceph:mgr available! 2026-03-10T14:52:45.684 INFO:tasks.cephadm.ceph_manager.ceph:waiting for all up 2026-03-10T14:52:45.684 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 -- ceph osd dump --format=json 2026-03-10T14:52:45.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:45 vm00 bash[20726]: cluster 2026-03-10T14:52:44.233516+0000 mgr.y (mgr.24425) 36 : cluster [DBG] pgmap v13: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:52:45.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:45 vm00 bash[20726]: cluster 2026-03-10T14:52:44.233516+0000 mgr.y (mgr.24425) 36 : cluster [DBG] pgmap v13: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:52:45.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:45 vm00 bash[20726]: audit 2026-03-10T14:52:45.550161+0000 mon.c (mon.2) 21 : audit [DBG] from='client.? 192.168.123.100:0/166612695' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-10T14:52:45.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:45 vm00 bash[20726]: audit 2026-03-10T14:52:45.550161+0000 mon.c (mon.2) 21 : audit [DBG] from='client.? 
192.168.123.100:0/166612695' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-10T14:52:45.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:45 vm00 bash[28403]: cluster 2026-03-10T14:52:44.233516+0000 mgr.y (mgr.24425) 36 : cluster [DBG] pgmap v13: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:52:45.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:45 vm00 bash[28403]: cluster 2026-03-10T14:52:44.233516+0000 mgr.y (mgr.24425) 36 : cluster [DBG] pgmap v13: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:52:45.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:45 vm00 bash[28403]: audit 2026-03-10T14:52:45.550161+0000 mon.c (mon.2) 21 : audit [DBG] from='client.? 192.168.123.100:0/166612695' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-10T14:52:45.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:45 vm00 bash[28403]: audit 2026-03-10T14:52:45.550161+0000 mon.c (mon.2) 21 : audit [DBG] from='client.? 
192.168.123.100:0/166612695' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch
2026-03-10T14:52:46.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:45 vm03 bash[23394]: cluster 2026-03-10T14:52:44.233516+0000 mgr.y (mgr.24425) 36 : cluster [DBG] pgmap v13: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:52:46.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:45 vm03 bash[23394]: audit 2026-03-10T14:52:45.550161+0000 mon.c (mon.2) 21 : audit [DBG] from='client.? 192.168.123.100:0/166612695' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch
2026-03-10T14:52:46.785 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:46 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:52:46.785 INFO:journalctl@ceph.osd.5.vm03.stdout:Mar 10 14:52:46 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:52:46.785 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 14:52:46 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:52:46.785 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 14:52:46 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:52:46.786 INFO:journalctl@ceph.osd.4.vm03.stdout:Mar 10 14:52:46 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:52:46.786 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 14:52:46 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:52:46.786 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:52:46 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:52:46.786 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 14:52:46 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:52:46.786 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:46 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:52:46.787 INFO:journalctl@ceph.osd.6.vm03.stdout:Mar 10 14:52:46 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T14:52:47.042 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:46 vm03 systemd[1]: Started Ceph grafana.a for 93bd26bc-1c8f-11f1-8404-610ce866bde7.
2026-03-10T14:52:47.042 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:46 vm03 bash[23394]: cluster 2026-03-10T14:52:46.233983+0000 mgr.y (mgr.24425) 37 : cluster [DBG] pgmap v14: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:52:47.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:46 vm00 bash[28403]: cluster 2026-03-10T14:52:46.233983+0000 mgr.y (mgr.24425) 37 : cluster [DBG] pgmap v14: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:52:47.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:46 vm00 bash[20726]: cluster 2026-03-10T14:52:46.233983+0000 mgr.y (mgr.24425) 37 : cluster [DBG] pgmap v14: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:52:47.295 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=settings t=2026-03-10T14:52:47.039816093Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD
compiled=2026-03-10T14:52:47Z 2026-03-10T14:52:47.295 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=settings t=2026-03-10T14:52:47.039960062Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini 2026-03-10T14:52:47.295 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=settings t=2026-03-10T14:52:47.039963448Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini 2026-03-10T14:52:47.295 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=settings t=2026-03-10T14:52:47.039965512Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana" 2026-03-10T14:52:47.295 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=settings t=2026-03-10T14:52:47.039967255Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana" 2026-03-10T14:52:47.295 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=settings t=2026-03-10T14:52:47.039984307Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins" 2026-03-10T14:52:47.295 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=settings t=2026-03-10T14:52:47.039986111Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning" 2026-03-10T14:52:47.295 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=settings t=2026-03-10T14:52:47.039987714Z level=info msg="Config overridden from command line" arg="default.log.mode=console" 2026-03-10T14:52:47.295 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=settings t=2026-03-10T14:52:47.039989497Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana" 2026-03-10T14:52:47.295 
INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=settings t=2026-03-10T14:52:47.039991151Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana" 2026-03-10T14:52:47.295 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=settings t=2026-03-10T14:52:47.039992603Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins" 2026-03-10T14:52:47.295 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=settings t=2026-03-10T14:52:47.039994086Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning" 2026-03-10T14:52:47.295 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=settings t=2026-03-10T14:52:47.039996871Z level=info msg=Target target=[all] 2026-03-10T14:52:47.295 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=settings t=2026-03-10T14:52:47.040000348Z level=info msg="Path Home" path=/usr/share/grafana 2026-03-10T14:52:47.295 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=settings t=2026-03-10T14:52:47.040002011Z level=info msg="Path Data" path=/var/lib/grafana 2026-03-10T14:52:47.295 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=settings t=2026-03-10T14:52:47.040003574Z level=info msg="Path Logs" path=/var/log/grafana 2026-03-10T14:52:47.295 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=settings t=2026-03-10T14:52:47.040006469Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins 2026-03-10T14:52:47.295 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=settings t=2026-03-10T14:52:47.040009535Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning 2026-03-10T14:52:47.295 
INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=settings t=2026-03-10T14:52:47.040011248Z level=info msg="App mode production" 2026-03-10T14:52:47.295 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=sqlstore t=2026-03-10T14:52:47.04020478Z level=info msg="Connecting to DB" dbtype=sqlite3 2026-03-10T14:52:47.295 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=sqlstore t=2026-03-10T14:52:47.040215239Z level=warn msg="SQLite database file has broader permissions than it should" path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r----- 2026-03-10T14:52:47.295 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.040587627Z level=info msg="Starting DB migrations" 2026-03-10T14:52:47.295 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.041254404Z level=info msg="Executing migration" id="create migration_log table" 2026-03-10T14:52:47.295 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.041725025Z level=info msg="Migration successfully executed" id="create migration_log table" duration=470.4µs 2026-03-10T14:52:47.295 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.045636375Z level=info msg="Executing migration" id="create user table" 2026-03-10T14:52:47.295 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.0466325Z level=info msg="Migration successfully executed" id="create user table" duration=996.336µs 2026-03-10T14:52:47.295 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.048693778Z level=info msg="Executing migration" id="add unique index user.login" 
2026-03-10T14:52:47.295 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.049432068Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=738.762µs 2026-03-10T14:52:47.295 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.050965599Z level=info msg="Executing migration" id="add unique index user.email" 2026-03-10T14:52:47.295 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.051604434Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=638.484µs 2026-03-10T14:52:47.295 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.05310927Z level=info msg="Executing migration" id="drop index UQE_user_login - v1" 2026-03-10T14:52:47.295 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.053741744Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=638.484µs 2026-03-10T14:52:47.295 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.055440544Z level=info msg="Executing migration" id="drop index UQE_user_email - v1" 2026-03-10T14:52:47.295 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.056038372Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=597.899µs 2026-03-10T14:52:47.295 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.057289103Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1" 2026-03-10T14:52:47.295 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 
bash[50670]: logger=migrator t=2026-03-10T14:52:47.058488798Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=1.199354ms 2026-03-10T14:52:47.295 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.060006278Z level=info msg="Executing migration" id="create user table v2" 2026-03-10T14:52:47.295 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.060657807Z level=info msg="Migration successfully executed" id="create user table v2" duration=653.412µs 2026-03-10T14:52:47.295 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.062103161Z level=info msg="Executing migration" id="create index UQE_user_login - v2" 2026-03-10T14:52:47.295 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.062731729Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=628.537µs 2026-03-10T14:52:47.295 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.064039365Z level=info msg="Executing migration" id="create index UQE_user_email - v2" 2026-03-10T14:52:47.295 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.064631284Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=590.665µs 2026-03-10T14:52:47.295 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.066108679Z level=info msg="Executing migration" id="copy data_source v1 to v2" 2026-03-10T14:52:47.295 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.06652064Z level=info msg="Migration successfully 
executed" id="copy data_source v1 to v2" duration=410.458µs 2026-03-10T14:52:47.295 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.068251098Z level=info msg="Executing migration" id="Drop old table user_v1" 2026-03-10T14:52:47.295 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.068776722Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=525.814µs 2026-03-10T14:52:47.295 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.069987929Z level=info msg="Executing migration" id="Add column help_flags1 to user table" 2026-03-10T14:52:47.295 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.070687347Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=699.238µs 2026-03-10T14:52:47.295 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.072021274Z level=info msg="Executing migration" id="Update user table charset" 2026-03-10T14:52:47.295 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.072034198Z level=info msg="Migration successfully executed" id="Update user table charset" duration=14.457µs 2026-03-10T14:52:47.296 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.073451079Z level=info msg="Executing migration" id="Add last_seen_at column to user" 2026-03-10T14:52:47.296 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.074184342Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=733.042µs 2026-03-10T14:52:47.296 
INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.075946589Z level=info msg="Executing migration" id="Add missing user data" 2026-03-10T14:52:47.296 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.076313947Z level=info msg="Migration successfully executed" id="Add missing user data" duration=367.457µs 2026-03-10T14:52:47.296 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.077661298Z level=info msg="Executing migration" id="Add is_disabled column to user" 2026-03-10T14:52:47.296 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.078625563Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=964.545µs 2026-03-10T14:52:47.296 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.080135689Z level=info msg="Executing migration" id="Add index user.login/user.email" 2026-03-10T14:52:47.296 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.080838143Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=702.354µs 2026-03-10T14:52:47.296 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.082090137Z level=info msg="Executing migration" id="Add is_service_account column to user" 2026-03-10T14:52:47.296 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.082994379Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=904.092µs 2026-03-10T14:52:47.296 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator 
t=2026-03-10T14:52:47.084792735Z level=info msg="Executing migration" id="Update is_service_account column to nullable" 2026-03-10T14:52:47.296 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.088836433Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=4.040343ms 2026-03-10T14:52:47.296 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.090510275Z level=info msg="Executing migration" id="Add uid column to user" 2026-03-10T14:52:47.296 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.091392265Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=881.7µs 2026-03-10T14:52:47.296 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.092734988Z level=info msg="Executing migration" id="Update uid column values for users" 2026-03-10T14:52:47.296 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.093062101Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=327.253µs 2026-03-10T14:52:47.296 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.094158061Z level=info msg="Executing migration" id="Add unique index user_uid" 2026-03-10T14:52:47.296 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.094708753Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=550.702µs 2026-03-10T14:52:47.296 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.096548124Z level=info msg="Executing migration" id="create 
temp user table v1-7" 2026-03-10T14:52:47.296 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.097177192Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=630.171µs 2026-03-10T14:52:47.296 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.09867238Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7" 2026-03-10T14:52:47.296 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.099309522Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=637.062µs 2026-03-10T14:52:47.296 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.100798037Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7" 2026-03-10T14:52:47.296 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.101359439Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=561.642µs 2026-03-10T14:52:47.296 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.103071312Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7" 2026-03-10T14:52:47.296 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.103628285Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=557.153µs 2026-03-10T14:52:47.296 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.105038975Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7" 
2026-03-10T14:52:47.296 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.105594715Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=552.794µs 2026-03-10T14:52:47.296 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.10704565Z level=info msg="Executing migration" id="Update temp_user table charset" 2026-03-10T14:52:47.296 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.107057803Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=12.814µs 2026-03-10T14:52:47.296 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.10903338Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1" 2026-03-10T14:52:47.296 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.109666595Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=630.36µs 2026-03-10T14:52:47.296 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.110786842Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1" 2026-03-10T14:52:47.296 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.111434323Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=647.602µs 2026-03-10T14:52:47.296 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.112781875Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1" 2026-03-10T14:52:47.296 
INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.1133294Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=547.765µs 2026-03-10T14:52:47.296 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.114699124Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1" 2026-03-10T14:52:47.296 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.115368946Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=670.033µs 2026-03-10T14:52:47.296 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.116686893Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1" 2026-03-10T14:52:47.296 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.118216506Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=1.528852ms 2026-03-10T14:52:47.296 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.119651881Z level=info msg="Executing migration" id="create temp_user v2" 2026-03-10T14:52:47.296 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.12032375Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=671.688µs 2026-03-10T14:52:47.296 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.121863111Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2" 2026-03-10T14:52:47.296 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 
14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.122468634Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=605.825µs 2026-03-10T14:52:47.296 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.123817028Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2" 2026-03-10T14:52:47.296 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.12441193Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=597.999µs 2026-03-10T14:52:47.296 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.125527829Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2" 2026-03-10T14:52:47.296 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.126086023Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=558.295µs 2026-03-10T14:52:47.296 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.127849403Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2" 2026-03-10T14:52:47.296 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.128450829Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=601.466µs 2026-03-10T14:52:47.296 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.129954483Z level=info msg="Executing migration" id="copy temp_user v1 to v2" 2026-03-10T14:52:47.296 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator 
t=2026-03-10T14:52:47.130373417Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=419.053µs 2026-03-10T14:52:47.296 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.131763338Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty" 2026-03-10T14:52:47.296 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.132285686Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=520.835µs 2026-03-10T14:52:47.296 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.133409799Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire" 2026-03-10T14:52:47.296 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.133796243Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=386.224µs 2026-03-10T14:52:47.296 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.135570713Z level=info msg="Executing migration" id="create star table" 2026-03-10T14:52:47.296 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.136162211Z level=info msg="Migration successfully executed" id="create star table" duration=596.046µs 2026-03-10T14:52:47.296 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.13728411Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" 2026-03-10T14:52:47.296 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.137922986Z level=info 
msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=636.051µs 2026-03-10T14:52:47.296 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.139264726Z level=info msg="Executing migration" id="create org table v1" 2026-03-10T14:52:47.296 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.139828732Z level=info msg="Migration successfully executed" id="create org table v1" duration=563.875µs 2026-03-10T14:52:47.297 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.141613112Z level=info msg="Executing migration" id="create index UQE_org_name - v1" 2026-03-10T14:52:47.297 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.142261715Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=648.302µs 2026-03-10T14:52:47.297 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.143652598Z level=info msg="Executing migration" id="create org_user table v1" 2026-03-10T14:52:47.297 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.144217355Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=564.437µs 2026-03-10T14:52:47.297 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.145497551Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" 2026-03-10T14:52:47.297 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.146103275Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=604.5µs 
2026-03-10T14:52:47.297 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.147556304Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" 2026-03-10T14:52:47.297 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.148350851Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=792.675µs 2026-03-10T14:52:47.297 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.15024213Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" 2026-03-10T14:52:47.297 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.151064909Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=823.902µs 2026-03-10T14:52:47.297 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.152531795Z level=info msg="Executing migration" id="Update org table charset" 2026-03-10T14:52:47.297 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.152553346Z level=info msg="Migration successfully executed" id="Update org table charset" duration=18.184µs 2026-03-10T14:52:47.297 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.154063943Z level=info msg="Executing migration" id="Update org_user table charset" 2026-03-10T14:52:47.297 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.154322005Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=16.321µs 2026-03-10T14:52:47.297 
INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.155725893Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" 2026-03-10T14:52:47.297 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.156079204Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=352.85µs 2026-03-10T14:52:47.297 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.157610079Z level=info msg="Executing migration" id="create dashboard table" 2026-03-10T14:52:47.297 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.158233466Z level=info msg="Migration successfully executed" id="create dashboard table" duration=623.477µs 2026-03-10T14:52:47.297 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.159499234Z level=info msg="Executing migration" id="add index dashboard.account_id" 2026-03-10T14:52:47.297 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.160091462Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=592.558µs 2026-03-10T14:52:47.297 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.161332485Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" 2026-03-10T14:52:47.297 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.161987422Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=659.755µs 2026-03-10T14:52:47.297 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 
bash[50670]: logger=migrator t=2026-03-10T14:52:47.163415995Z level=info msg="Executing migration" id="create dashboard_tag table" 2026-03-10T14:52:47.297 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.163993725Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=577.239µs 2026-03-10T14:52:47.297 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.16572203Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" 2026-03-10T14:52:47.297 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.166307295Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=585.176µs 2026-03-10T14:52:47.297 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.167588123Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" 2026-03-10T14:52:47.297 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.168209274Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=621.222µs 2026-03-10T14:52:47.297 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.169321527Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" 2026-03-10T14:52:47.297 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.171182369Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=1.855993ms 2026-03-10T14:52:47.297 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 
vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.172977569Z level=info msg="Executing migration" id="create dashboard v2" 2026-03-10T14:52:47.297 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.173676156Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=698.627µs 2026-03-10T14:52:47.297 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.174987511Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" 2026-03-10T14:52:47.297 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.175634862Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=647.301µs 2026-03-10T14:52:47.297 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.177144296Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" 2026-03-10T14:52:47.297 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.177820072Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=676.256µs 2026-03-10T14:52:47.297 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.179576358Z level=info msg="Executing migration" id="copy dashboard v1 to v2" 2026-03-10T14:52:47.297 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.17997286Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=396.582µs 2026-03-10T14:52:47.297 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.18119153Z level=info 
msg="Executing migration" id="drop table dashboard_v1" 2026-03-10T14:52:47.297 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.181885339Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=693.748µs 2026-03-10T14:52:47.297 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.183363646Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" 2026-03-10T14:52:47.297 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.183614456Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=251.15µs 2026-03-10T14:52:47.297 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.18948434Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" 2026-03-10T14:52:47.297 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.190910699Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=1.430958ms 2026-03-10T14:52:47.297 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.192347288Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" 2026-03-10T14:52:47.297 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.193252883Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=905.625µs 2026-03-10T14:52:47.297 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.194439694Z level=info msg="Executing migration" id="Add column gnetId in 
dashboard" 2026-03-10T14:52:47.297 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.199311812Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=4.870084ms 2026-03-10T14:52:47.297 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.201578924Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" 2026-03-10T14:52:47.297 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.202342494Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=764.23µs 2026-03-10T14:52:47.297 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.203700025Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" 2026-03-10T14:52:47.297 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.204688864Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=972.86µs 2026-03-10T14:52:47.297 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.205834238Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" 2026-03-10T14:52:47.297 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.206419154Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=584.835µs 2026-03-10T14:52:47.297 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.208129824Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" 2026-03-10T14:52:47.297 
INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.208725799Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=596.135µs 2026-03-10T14:52:47.297 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.210042604Z level=info msg="Executing migration" id="Update dashboard table charset" 2026-03-10T14:52:47.297 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.210054928Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=12.563µs 2026-03-10T14:52:47.297 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.211449718Z level=info msg="Executing migration" id="Update dashboard_tag table charset" 2026-03-10T14:52:47.297 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.211462773Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=13.205µs 2026-03-10T14:52:47.297 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.212700508Z level=info msg="Executing migration" id="Add column folder_id in dashboard" 2026-03-10T14:52:47.297 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.213665805Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=962.161µs 2026-03-10T14:52:47.297 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.215456185Z level=info msg="Executing migration" id="Add column isFolder in dashboard" 2026-03-10T14:52:47.297 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 
bash[50670]: logger=migrator t=2026-03-10T14:52:47.216403428Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=947.244µs 2026-03-10T14:52:47.298 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.217694785Z level=info msg="Executing migration" id="Add column has_acl in dashboard" 2026-03-10T14:52:47.298 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.21857381Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=881.761µs 2026-03-10T14:52:47.298 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.219903699Z level=info msg="Executing migration" id="Add column uid in dashboard" 2026-03-10T14:52:47.298 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.220810646Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=906.786µs 2026-03-10T14:52:47.298 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.222064402Z level=info msg="Executing migration" id="Update uid column values in dashboard" 2026-03-10T14:52:47.298 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.222340809Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=276.447µs 2026-03-10T14:52:47.298 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.223569589Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid" 2026-03-10T14:52:47.298 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.224160766Z level=info 
msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=590.967µs 2026-03-10T14:52:47.298 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.225396669Z level=info msg="Executing migration" id="Remove unique index org_id_slug" 2026-03-10T14:52:47.298 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.225947159Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=549.818µs 2026-03-10T14:52:47.298 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.227439653Z level=info msg="Executing migration" id="Update dashboard title length" 2026-03-10T14:52:47.298 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.227454771Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=15.679µs 2026-03-10T14:52:47.298 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.228595084Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id" 2026-03-10T14:52:47.298 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.229182183Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=587.079µs 2026-03-10T14:52:47.298 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.23023309Z level=info msg="Executing migration" id="create dashboard_provisioning" 2026-03-10T14:52:47.298 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.230807415Z level=info msg="Migration successfully executed" 
id="create dashboard_provisioning" duration=574.205µs 2026-03-10T14:52:47.298 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.232422537Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" 2026-03-10T14:52:47.298 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.234213238Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=1.79064ms 2026-03-10T14:52:47.298 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.235557464Z level=info msg="Executing migration" id="create dashboard_provisioning v2" 2026-03-10T14:52:47.298 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.23611115Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=553.236µs 2026-03-10T14:52:47.298 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.237443584Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2" 2026-03-10T14:52:47.298 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.238129848Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=686.183µs 2026-03-10T14:52:47.298 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.239964701Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" 2026-03-10T14:52:47.298 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: 
logger=migrator t=2026-03-10T14:52:47.240580053Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=615.682µs 2026-03-10T14:52:47.298 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.241914742Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2" 2026-03-10T14:52:47.298 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.242251792Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=337.04µs 2026-03-10T14:52:47.298 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.243562505Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty" 2026-03-10T14:52:47.298 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.244034659Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=472.164µs 2026-03-10T14:52:47.298 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.245131962Z level=info msg="Executing migration" id="Add check_sum column" 2026-03-10T14:52:47.298 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.246022168Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=899.965µs 2026-03-10T14:52:47.298 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.247617403Z level=info msg="Executing migration" id="Add index for dashboard_title" 2026-03-10T14:52:47.298 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.248241442Z 
level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=624.5µs 2026-03-10T14:52:47.298 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.249294743Z level=info msg="Executing migration" id="delete tags for deleted dashboards" 2026-03-10T14:52:47.298 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.249580618Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=285.664µs 2026-03-10T14:52:47.298 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.250648176Z level=info msg="Executing migration" id="delete stars for deleted dashboards" 2026-03-10T14:52:47.298 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.250951724Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=300.672µs 2026-03-10T14:52:47.298 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.252465187Z level=info msg="Executing migration" id="Add index for dashboard_is_folder" 2026-03-10T14:52:47.298 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.253039481Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=574.244µs 2026-03-10T14:52:47.298 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.254277187Z level=info msg="Executing migration" id="Add isPublic for dashboard" 2026-03-10T14:52:47.298 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.255279042Z level=info msg="Migration successfully executed" id="Add isPublic for 
dashboard" duration=1.005573ms 2026-03-10T14:52:47.298 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.256454893Z level=info msg="Executing migration" id="create data_source table" 2026-03-10T14:52:47.298 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.257071507Z level=info msg="Migration successfully executed" id="create data_source table" duration=616.383µs 2026-03-10T14:52:47.298 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.258713008Z level=info msg="Executing migration" id="add index data_source.account_id" 2026-03-10T14:52:47.298 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.259367564Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=654.686µs 2026-03-10T14:52:47.298 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.26085653Z level=info msg="Executing migration" id="add unique index data_source.account_id_name" 2026-03-10T14:52:47.298 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.261473996Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=617.667µs 2026-03-10T14:52:47.298 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.262864288Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1" 2026-03-10T14:52:47.298 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.263500939Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=632.714µs 
2026-03-10T14:52:47.298 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.265109149Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1" 2026-03-10T14:52:47.298 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.265700365Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=593.422µs 2026-03-10T14:52:47.298 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.266772051Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1" 2026-03-10T14:52:47.298 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.268786551Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=2.012256ms 2026-03-10T14:52:47.298 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.270151826Z level=info msg="Executing migration" id="create data_source table v2" 2026-03-10T14:52:47.298 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.27077401Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=622.024µs 2026-03-10T14:52:47.298 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.272412687Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2" 2026-03-10T14:52:47.298 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.273200922Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=788.185µs 
2026-03-10T14:52:47.298 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.274355192Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2"
2026-03-10T14:52:47.298 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.275084567Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=729.185µs
2026-03-10T14:52:47.298 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.276626613Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2"
2026-03-10T14:52:47.299 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.277325711Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=700.661µs
2026-03-10T14:52:47.299 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.278935333Z level=info msg="Executing migration" id="Add column with_credentials"
2026-03-10T14:52:47.299 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.280116985Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=1.1543ms
2026-03-10T14:52:47.299 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.281528978Z level=info msg="Executing migration" id="Add secure json data column"
2026-03-10T14:52:47.299 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.282632172Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=1.103005ms
2026-03-10T14:52:47.299 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.28397765Z level=info msg="Executing migration" id="Update data_source table charset"
2026-03-10T14:52:47.299 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.28399364Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=20.088µs
2026-03-10T14:52:47.299 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.285523563Z level=info msg="Executing migration" id="Update initial version to 1"
2026-03-10T14:52:47.299 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.285791425Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=267.621µs
2026-03-10T14:52:47.299 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.286909848Z level=info msg="Executing migration" id="Add read_only data column"
2026-03-10T14:52:47.299 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.287892456Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=982.39µs
2026-03-10T14:52:47.299 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.289990002Z level=info msg="Executing migration" id="Migrate logging ds to loki ds"
2026-03-10T14:52:47.299 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.29009023Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=100.417µs
2026-03-10T14:52:47.299 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.291951443Z level=info msg="Executing migration" id="Update json_data with nulls"
2026-03-10T14:52:47.299 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.292235744Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=284.312µs
2026-03-10T14:52:47.546 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.293407437Z level=info msg="Executing migration" id="Add uid column"
2026-03-10T14:52:47.546 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.296627735Z level=info msg="Migration successfully executed" id="Add uid column" duration=3.217823ms
2026-03-10T14:52:47.546 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.298329449Z level=info msg="Executing migration" id="Update uid value"
2026-03-10T14:52:47.546 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.298464813Z level=info msg="Migration successfully executed" id="Update uid value" duration=136.745µs
2026-03-10T14:52:47.546 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.300186003Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid"
2026-03-10T14:52:47.546 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.30062747Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=442.008µs
2026-03-10T14:52:47.546 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.301836452Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default"
2026-03-10T14:52:47.546 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.302160449Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=324.207µs
2026-03-10T14:52:47.546 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.303753089Z level=info msg="Executing migration" id="create api_key table"
2026-03-10T14:52:47.546 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.304123372Z level=info msg="Migration successfully executed" id="create api_key table" duration=370.463µs
2026-03-10T14:52:47.546 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.305559139Z level=info msg="Executing migration" id="add index api_key.account_id"
2026-03-10T14:52:47.546 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.305898104Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=338.985µs
2026-03-10T14:52:47.546 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.307355621Z level=info msg="Executing migration" id="add index api_key.key"
2026-03-10T14:52:47.546 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.3076756Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=320.059µs
2026-03-10T14:52:47.546 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.30924655Z level=info msg="Executing migration" id="add index api_key.account_id_name"
2026-03-10T14:52:47.546 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.309601534Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=352.689µs
2026-03-10T14:52:47.546 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.311105679Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1"
2026-03-10T14:52:47.546 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.311477776Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=372.508µs
2026-03-10T14:52:47.546 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.312877736Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1"
2026-03-10T14:52:47.546 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.313199217Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=321.431µs
2026-03-10T14:52:47.546 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.314564191Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1"
2026-03-10T14:52:47.546 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.315093061Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=530.051µs
2026-03-10T14:52:47.546 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.316477202Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1"
2026-03-10T14:52:47.546 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.318673973Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=2.196141ms
2026-03-10T14:52:47.546 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.320551727Z level=info msg="Executing migration" id="create api_key table v2"
2026-03-10T14:52:47.546 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.321104491Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=552.935µs
2026-03-10T14:52:47.546 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.3223322Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2"
2026-03-10T14:52:47.546 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.322901885Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=569.555µs
2026-03-10T14:52:47.546 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.324282289Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2"
2026-03-10T14:52:47.546 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.324788376Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=508.913µs
2026-03-10T14:52:47.546 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.326167467Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2"
2026-03-10T14:52:47.546 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.326649769Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=482.222µs
2026-03-10T14:52:47.546 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.327968508Z level=info msg="Executing migration" id="copy api_key v1 to v2"
2026-03-10T14:52:47.546 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.328322399Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=354.032µs
2026-03-10T14:52:47.546 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.32932189Z level=info msg="Executing migration" id="Drop old table api_key_v1"
2026-03-10T14:52:47.546 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.329742647Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=425.235µs
2026-03-10T14:52:47.546 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.331273662Z level=info msg="Executing migration" id="Update api_key table charset"
2026-03-10T14:52:47.546 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.331439292Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=165.739µs
2026-03-10T14:52:47.546 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.332492053Z level=info msg="Executing migration" id="Add expires to api_key table"
2026-03-10T14:52:47.546 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.333448673Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=956.31µs
2026-03-10T14:52:47.546 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.334501494Z level=info msg="Executing migration" id="Add service account foreign key"
2026-03-10T14:52:47.546 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.33554171Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=1.036099ms
2026-03-10T14:52:47.546 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.336774638Z level=info msg="Executing migration" id="set service account foreign key to nil if 0"
2026-03-10T14:52:47.546 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.336993086Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=218.298µs
2026-03-10T14:52:47.546 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.338452328Z level=info msg="Executing migration" id="Add last_used_at to api_key table"
2026-03-10T14:52:47.546 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.339708769Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=1.255941ms
2026-03-10T14:52:47.546 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.341074164Z level=info msg="Executing migration" id="Add is_revoked column to api_key table"
2026-03-10T14:52:47.546 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.342284259Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=1.210536ms
2026-03-10T14:52:47.546 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.343609218Z level=info msg="Executing migration" id="create dashboard_snapshot table v4"
2026-03-10T14:52:47.546 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.344162624Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=553.376µs
2026-03-10T14:52:47.546 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.345570578Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1"
2026-03-10T14:52:47.547 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.346000734Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=429.794µs
2026-03-10T14:52:47.547 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.347341603Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2"
2026-03-10T14:52:47.547 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.347824115Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=482.262µs
2026-03-10T14:52:47.547 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.348921569Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5"
2026-03-10T14:52:47.547 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.349420214Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=501.018µs
2026-03-10T14:52:47.547 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.353611727Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5"
2026-03-10T14:52:47.547 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.354221869Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=610.352µs
2026-03-10T14:52:47.547 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.35546743Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5"
2026-03-10T14:52:47.547 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.355960453Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=492.683µs
2026-03-10T14:52:47.547 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.357286335Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2"
2026-03-10T14:52:47.547 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.357450883Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=164.628µs
2026-03-10T14:52:47.547 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.358750495Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset"
2026-03-10T14:52:47.547 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.358804065Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=53.961µs
2026-03-10T14:52:47.547 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.360199787Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table"
2026-03-10T14:52:47.547 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.361226348Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=1.026271ms
2026-03-10T14:52:47.547 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.362206071Z level=info msg="Executing migration" id="Add encrypted dashboard json column"
2026-03-10T14:52:47.547 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.363229827Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=1.023535ms
2026-03-10T14:52:47.547 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.364768065Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB"
2026-03-10T14:52:47.547 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.364934147Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=168.565µs
2026-03-10T14:52:47.547 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.365926033Z level=info msg="Executing migration" id="create quota table v1"
2026-03-10T14:52:47.547 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.366446557Z level=info msg="Migration successfully executed" id="create quota table v1" duration=523.96µs
2026-03-10T14:52:47.547 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.367724388Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1"
2026-03-10T14:52:47.547 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.368274328Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=549.819µs
2026-03-10T14:52:47.547 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.369558671Z level=info msg="Executing migration" id="Update quota table charset"
2026-03-10T14:52:47.547 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.369613033Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=58.069µs
2026-03-10T14:52:47.547 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.371277488Z level=info msg="Executing migration" id="create plugin_setting table"
2026-03-10T14:52:47.547 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.371782524Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=504.955µs
2026-03-10T14:52:47.547 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.37309027Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1"
2026-03-10T14:52:47.547 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.37360812Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=515.715µs
2026-03-10T14:52:47.547 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.395289396Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings"
2026-03-10T14:52:47.547 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.396709843Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=1.420678ms
2026-03-10T14:52:47.547 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.398342148Z level=info msg="Executing migration" id="Update plugin_setting table charset"
2026-03-10T14:52:47.547 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.398374599Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=32.821µs
2026-03-10T14:52:47.547 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.399810416Z level=info msg="Executing migration" id="create session table"
2026-03-10T14:52:47.547 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.400360094Z level=info msg="Migration successfully executed" id="create session table" duration=549.428µs
2026-03-10T14:52:47.547 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.401736861Z level=info msg="Executing migration" id="Drop old table playlist table"
2026-03-10T14:52:47.547 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.401897703Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=160.512µs
2026-03-10T14:52:47.547 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.403534526Z level=info msg="Executing migration" id="Drop old table playlist_item table"
2026-03-10T14:52:47.547 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.403677253Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=142.396µs
2026-03-10T14:52:47.547 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.40471765Z level=info msg="Executing migration" id="create playlist table v2"
2026-03-10T14:52:47.547 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.405210562Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=489.296µs
2026-03-10T14:52:47.547 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.40647585Z level=info msg="Executing migration" id="create playlist item table v2"
2026-03-10T14:52:47.547 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.40694555Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=475.099µs
2026-03-10T14:52:47.548 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.40817976Z level=info msg="Executing migration" id="Update playlist table charset"
2026-03-10T14:52:47.548 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.408216298Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=36.959µs
2026-03-10T14:52:47.548 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.409664167Z level=info msg="Executing migration" id="Update playlist_item table charset"
2026-03-10T14:52:47.548 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.409690888Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=27.191µs
2026-03-10T14:52:47.548 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.410709573Z level=info msg="Executing migration" id="Add playlist column created_at"
2026-03-10T14:52:47.548 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.411819771Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=1.109737ms
2026-03-10T14:52:47.548 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.412966537Z level=info msg="Executing migration" id="Add playlist column updated_at"
2026-03-10T14:52:47.548 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.41404749Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=1.078708ms
2026-03-10T14:52:47.548 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.415428715Z level=info msg="Executing migration" id="drop preferences table v2"
2026-03-10T14:52:47.548 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.415588044Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=159.138µs
2026-03-10T14:52:47.548 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.416620806Z level=info msg="Executing migration" id="drop preferences table v3"
2026-03-10T14:52:47.548 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.416769094Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=145.493µs
2026-03-10T14:52:47.548 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.417768754Z level=info msg="Executing migration" id="create preferences table v3"
2026-03-10T14:52:47.548 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.418233724Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=464.861µs
2026-03-10T14:52:47.548 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.419495897Z level=info msg="Executing migration" id="Update preferences table charset"
2026-03-10T14:52:47.548 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.419520913Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=25.488µs
2026-03-10T14:52:47.548 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.420866492Z level=info msg="Executing migration" id="Add column team_id in preferences"
2026-03-10T14:52:47.548 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.422357332Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=1.49061ms
2026-03-10T14:52:47.548 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.42360696Z level=info msg="Executing migration" id="Update team_id column values in preferences"
2026-03-10T14:52:47.548 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.423806513Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=197.098µs
2026-03-10T14:52:47.548 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.424788983Z level=info msg="Executing migration" id="Add column week_start in preferences"
2026-03-10T14:52:47.548 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.425954414Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=1.1652ms
2026-03-10T14:52:47.548 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.426935809Z level=info msg="Executing migration" id="Add column preferences.json_data"
2026-03-10T14:52:47.548 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.428103906Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=1.167936ms
2026-03-10T14:52:47.548 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.429639901Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1"
2026-03-10T14:52:47.548 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.429810139Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=172.012µs
2026-03-10T14:52:47.548 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.431300679Z level=info msg="Executing migration" id="Add preferences index org_id"
2026-03-10T14:52:47.548 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.431910069Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=609.089µs
2026-03-10T14:52:47.548 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.433273971Z level=info msg="Executing migration" id="Add preferences index user_id"
2026-03-10T14:52:47.548 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.433862753Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=585.205µs
2026-03-10T14:52:47.548 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.440269934Z level=info msg="Executing migration" id="create alert table v1"
2026-03-10T14:52:47.548 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.440898892Z level=info msg="Migration successfully executed" id="create alert table v1" duration=629.318µs
2026-03-10T14:52:47.548 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.442201389Z level=info msg="Executing migration" id="add index alert org_id & id "
2026-03-10T14:52:47.548 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.442769272Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=571.138µs
2026-03-10T14:52:47.548 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.444233441Z level=info msg="Executing migration" id="add index alert state"
2026-03-10T14:52:47.548 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.445510992Z level=info msg="Migration successfully executed" id="add index alert state" duration=1.277681ms
2026-03-10T14:52:47.548 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.446689217Z level=info msg="Executing migration" id="add index alert dashboard_id"
2026-03-10T14:52:47.548 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.447180647Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=491.7µs
2026-03-10T14:52:47.548 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.448684531Z level=info msg="Executing migration" id="Create alert_rule_tag table v1"
2026-03-10T14:52:47.548 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.449108766Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=424.075µs
2026-03-10T14:52:47.548 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.450351562Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id"
2026-03-10T14:52:47.548 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.450883076Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=531.394µs
2026-03-10T14:52:47.548 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.452041514Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1"
2026-03-10T14:52:47.548 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.452520189Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=480.86µs
2026-03-10T14:52:47.548 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.453837285Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1"
2026-03-10T14:52:47.548 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.456675065Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=2.837621ms
2026-03-10T14:52:47.548 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.457886833Z level=info msg="Executing migration" id="Create alert_rule_tag table v2"
2026-03-10T14:52:47.548 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.458360991Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=473.927µs
2026-03-10T14:52:47.548 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.459633493Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2"
2026-03-10T14:52:47.548 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.46016132Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=527.747µs
2026-03-10T14:52:47.548 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.461797231Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2"
2026-03-10T14:52:47.548 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.462069191Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=271.709µs
2026-03-10T14:52:47.548 
INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.463262433Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1" 2026-03-10T14:52:47.548 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.463691227Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=428.643µs 2026-03-10T14:52:47.548 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.464693351Z level=info msg="Executing migration" id="create alert_notification table v1" 2026-03-10T14:52:47.548 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.465261064Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=566.331µs 2026-03-10T14:52:47.548 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.466614968Z level=info msg="Executing migration" id="Add column is_default" 2026-03-10T14:52:47.548 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.468352269Z level=info msg="Migration successfully executed" id="Add column is_default" duration=1.736519ms 2026-03-10T14:52:47.548 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.469545031Z level=info msg="Executing migration" id="Add column frequency" 2026-03-10T14:52:47.548 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.470787627Z level=info msg="Migration successfully executed" id="Add column frequency" duration=1.242495ms 2026-03-10T14:52:47.548 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator 
t=2026-03-10T14:52:47.471983686Z level=info msg="Executing migration" id="Add column send_reminder" 2026-03-10T14:52:47.549 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.473613004Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=1.62977ms 2026-03-10T14:52:47.549 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.475008566Z level=info msg="Executing migration" id="Add column disable_resolve_message" 2026-03-10T14:52:47.549 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.476786032Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=1.776264ms 2026-03-10T14:52:47.549 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.47799785Z level=info msg="Executing migration" id="add index alert_notification org_id & name" 2026-03-10T14:52:47.549 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.478506854Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=508.882µs 2026-03-10T14:52:47.549 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.479788161Z level=info msg="Executing migration" id="Update alert table charset" 2026-03-10T14:52:47.549 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.479800244Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=12.432µs 2026-03-10T14:52:47.549 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.481084427Z level=info msg="Executing migration" id="Update 
alert_notification table charset" 2026-03-10T14:52:47.549 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.481095879Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=11.932µs 2026-03-10T14:52:47.549 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.482481492Z level=info msg="Executing migration" id="create notification_journal table v1" 2026-03-10T14:52:47.549 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.483043383Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=561.51µs 2026-03-10T14:52:47.549 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.484380986Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id" 2026-03-10T14:52:47.549 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.484971952Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=590.955µs 2026-03-10T14:52:47.549 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.486279108Z level=info msg="Executing migration" id="drop alert_notification_journal" 2026-03-10T14:52:47.549 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.486857461Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=578.142µs 2026-03-10T14:52:47.549 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.488456734Z level=info msg="Executing migration" 
id="create alert_notification_state table v1" 2026-03-10T14:52:47.549 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.489087904Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=631.14µs 2026-03-10T14:52:47.549 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.490253746Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id" 2026-03-10T14:52:47.549 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.490959217Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=705.962µs 2026-03-10T14:52:47.549 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.492210228Z level=info msg="Executing migration" id="Add for to alert table" 2026-03-10T14:52:47.549 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.493738508Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=1.527387ms 2026-03-10T14:52:47.549 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.495026138Z level=info msg="Executing migration" id="Add column uid in alert_notification" 2026-03-10T14:52:47.549 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.496709779Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=1.682368ms 2026-03-10T14:52:47.549 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.49799827Z level=info msg="Executing migration" 
id="Update uid column values in alert_notification" 2026-03-10T14:52:47.549 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.498262364Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=263.824µs 2026-03-10T14:52:47.549 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.499626217Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid" 2026-03-10T14:52:47.549 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.500317821Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=691.355µs 2026-03-10T14:52:47.549 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.501844428Z level=info msg="Executing migration" id="Remove unique index org_id_name" 2026-03-10T14:52:47.549 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.502358379Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=514.814µs 2026-03-10T14:52:47.549 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.503572191Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification" 2026-03-10T14:52:47.549 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.504996517Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=1.424005ms 2026-03-10T14:52:47.549 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.506096185Z level=info msg="Executing 
migration" id="alter alert.settings to mediumtext" 2026-03-10T14:52:47.549 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.506281732Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=184.105µs 2026-03-10T14:52:47.549 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.50792086Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id" 2026-03-10T14:52:47.549 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.508412149Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=490.939µs 2026-03-10T14:52:47.549 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.510698087Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id" 2026-03-10T14:52:47.549 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.511633377Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=934.759µs 2026-03-10T14:52:47.549 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.513293595Z level=info msg="Executing migration" id="Drop old annotation table v4" 2026-03-10T14:52:47.549 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.513459867Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=165.961µs 2026-03-10T14:52:47.549 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.514871117Z level=info msg="Executing migration" 
id="create annotation table v5" 2026-03-10T14:52:47.549 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.515386782Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=516.446µs 2026-03-10T14:52:47.549 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.516636331Z level=info msg="Executing migration" id="add index annotation 0 v3" 2026-03-10T14:52:47.549 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.517136175Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=498.914µs 2026-03-10T14:52:47.549 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.518301928Z level=info msg="Executing migration" id="add index annotation 1 v3" 2026-03-10T14:52:47.549 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.518807054Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=505.086µs 2026-03-10T14:52:47.549 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.520315366Z level=info msg="Executing migration" id="add index annotation 2 v3" 2026-03-10T14:52:47.549 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.520808819Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=494.525µs 2026-03-10T14:52:47.549 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.521984339Z level=info msg="Executing migration" id="add index annotation 3 v3" 2026-03-10T14:52:47.549 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 
bash[50670]: logger=migrator t=2026-03-10T14:52:47.522514892Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=530.443µs 2026-03-10T14:52:47.549 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.523780931Z level=info msg="Executing migration" id="add index annotation 4 v3" 2026-03-10T14:52:47.549 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.524320079Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=538.839µs 2026-03-10T14:52:47.549 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.525858049Z level=info msg="Executing migration" id="Update annotation table charset" 2026-03-10T14:52:47.549 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.5258693Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=11.461µs 2026-03-10T14:52:47.549 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.526886262Z level=info msg="Executing migration" id="Add column region_id to annotation table" 2026-03-10T14:52:47.549 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.528248422Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=1.362209ms 2026-03-10T14:52:47.549 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.529499865Z level=info msg="Executing migration" id="Drop category_id index" 2026-03-10T14:52:47.549 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.5299996Z level=info msg="Migration 
successfully executed" id="Drop category_id index" duration=499.704µs 2026-03-10T14:52:47.549 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.531501851Z level=info msg="Executing migration" id="Add column tags to annotation table" 2026-03-10T14:52:47.549 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.53282669Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=1.324538ms 2026-03-10T14:52:47.549 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.533844044Z level=info msg="Executing migration" id="Create annotation_tag table v2" 2026-03-10T14:52:47.549 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.534293495Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=447.998µs 2026-03-10T14:52:47.550 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.535550497Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id" 2026-03-10T14:52:47.550 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.536056304Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=505.735µs 2026-03-10T14:52:47.550 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.53758823Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" 2026-03-10T14:52:47.550 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.538089509Z level=info msg="Migration successfully 
executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=500.488µs 2026-03-10T14:52:47.550 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.539376168Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2" 2026-03-10T14:52:47.550 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.543965035Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=4.587014ms 2026-03-10T14:52:47.550 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.545196961Z level=info msg="Executing migration" id="Create annotation_tag table v3" 2026-03-10T14:52:47.550 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.545643817Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=446.625µs 2026-03-10T14:52:47.716 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 14:52:47 vm00 bash[55880]: ts=2026-03-10T14:52:47.260Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.001925431s 2026-03-10T14:52:47.801 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.548752315Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" 2026-03-10T14:52:47.802 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.549554475Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=803.574µs 2026-03-10T14:52:47.802 
INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.550932133Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3" 2026-03-10T14:52:47.802 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.551247173Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=313.257µs 2026-03-10T14:52:47.802 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.552713107Z level=info msg="Executing migration" id="drop table annotation_tag_v2" 2026-03-10T14:52:47.802 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.553176314Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=463.707µs 2026-03-10T14:52:47.802 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.554206782Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty" 2026-03-10T14:52:47.802 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.554484322Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=278.762µs 2026-03-10T14:52:47.802 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.55604437Z level=info msg="Executing migration" id="Add created time to annotation table" 2026-03-10T14:52:47.802 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.557701311Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=1.656981ms 2026-03-10T14:52:47.802 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 
14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.559550303Z level=info msg="Executing migration" id="Add updated time to annotation table" 2026-03-10T14:52:47.802 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.560910869Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=1.359394ms 2026-03-10T14:52:47.802 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.562254172Z level=info msg="Executing migration" id="Add index for created in annotation table" 2026-03-10T14:52:47.802 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.562805334Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=551.071µs 2026-03-10T14:52:47.802 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.56409617Z level=info msg="Executing migration" id="Add index for updated in annotation table" 2026-03-10T14:52:47.802 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.56465745Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=561.12µs 2026-03-10T14:52:47.802 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.566213001Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds" 2026-03-10T14:52:47.802 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.566426671Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=213.408µs 2026-03-10T14:52:47.802 
INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.567630754Z level=info msg="Executing migration" id="Add epoch_end column" 2026-03-10T14:52:47.802 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.569021828Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=1.390672ms 2026-03-10T14:52:47.802 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.570101068Z level=info msg="Executing migration" id="Add index for epoch_end" 2026-03-10T14:52:47.802 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.570623575Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=522.447µs 2026-03-10T14:52:47.802 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.571853507Z level=info msg="Executing migration" id="Make epoch_end the same as epoch" 2026-03-10T14:52:47.802 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.572063921Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=208.77µs 2026-03-10T14:52:47.802 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.573422713Z level=info msg="Executing migration" id="Move region to single row" 2026-03-10T14:52:47.802 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.573707486Z level=info msg="Migration successfully executed" id="Move region to single row" duration=285.394µs 2026-03-10T14:52:47.802 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.574779072Z 
level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" 2026-03-10T14:52:47.802 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.575345511Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=565.588µs 2026-03-10T14:52:47.802 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.576606712Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" 2026-03-10T14:52:47.802 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.577114253Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=507.551µs 2026-03-10T14:52:47.802 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.578344074Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" 2026-03-10T14:52:47.802 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.578933557Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=589.523µs 2026-03-10T14:52:47.802 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.580014079Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table" 2026-03-10T14:52:47.802 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.580566212Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=551.211µs 
2026-03-10T14:52:47.802 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.581646354Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table" 2026-03-10T14:52:47.803 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.582159403Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=512.919µs 2026-03-10T14:52:47.803 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.583674068Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" 2026-03-10T14:52:47.803 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.584201766Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=527.498µs 2026-03-10T14:52:47.803 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.585279153Z level=info msg="Executing migration" id="Increase tags column to length 4096" 2026-03-10T14:52:47.803 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.585456725Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=177.332µs 2026-03-10T14:52:47.803 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.586875109Z level=info msg="Executing migration" id="create test_data table" 2026-03-10T14:52:47.803 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.58734021Z level=info msg="Migration successfully executed" id="create test_data table" duration=464.799µs 
2026-03-10T14:52:47.803 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.588875524Z level=info msg="Executing migration" id="create dashboard_version table v1" 2026-03-10T14:52:47.803 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.589392059Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=516.256µs 2026-03-10T14:52:47.803 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.590717991Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" 2026-03-10T14:52:47.803 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.5913534Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=635.459µs 2026-03-10T14:52:47.803 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.592699188Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" 2026-03-10T14:52:47.803 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.593308509Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=607.989µs 2026-03-10T14:52:47.803 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.594923462Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0" 2026-03-10T14:52:47.803 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.595183678Z level=info msg="Migration successfully executed" id="Set dashboard 
version to 1 where 0" duration=259.635µs 2026-03-10T14:52:47.803 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.596501003Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" 2026-03-10T14:52:47.803 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.596839476Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=338.232µs 2026-03-10T14:52:47.803 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.597837675Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" 2026-03-10T14:52:47.803 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.597991523Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=153.877µs 2026-03-10T14:52:47.803 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.599263043Z level=info msg="Executing migration" id="create team table" 2026-03-10T14:52:47.803 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.599703266Z level=info msg="Migration successfully executed" id="create team table" duration=440.083µs 2026-03-10T14:52:47.803 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.601135838Z level=info msg="Executing migration" id="add index team.org_id" 2026-03-10T14:52:47.803 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.601711164Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=575.086µs 
2026-03-10T14:52:47.803 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.603020475Z level=info msg="Executing migration" id="add unique index team_org_id_name" 2026-03-10T14:52:47.803 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.603598627Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=578.053µs 2026-03-10T14:52:47.803 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.60703095Z level=info msg="Executing migration" id="Add column uid in team" 2026-03-10T14:52:47.803 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.608550033Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=1.518832ms 2026-03-10T14:52:47.803 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.609757593Z level=info msg="Executing migration" id="Update uid column values in team" 2026-03-10T14:52:47.803 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.609965702Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=207.909µs 2026-03-10T14:52:47.803 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.61131125Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" 2026-03-10T14:52:47.803 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.61185662Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=545.42µs 2026-03-10T14:52:47.803 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 
bash[50670]: logger=migrator t=2026-03-10T14:52:47.613205685Z level=info msg="Executing migration" id="create team member table" 2026-03-10T14:52:47.804 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.613676587Z level=info msg="Migration successfully executed" id="create team member table" duration=470.661µs 2026-03-10T14:52:47.804 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.61532386Z level=info msg="Executing migration" id="add index team_member.org_id" 2026-03-10T14:52:47.804 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.615860593Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=536.673µs 2026-03-10T14:52:47.804 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.617105254Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id" 2026-03-10T14:52:47.804 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.617646046Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=540.912µs 2026-03-10T14:52:47.804 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.618967789Z level=info msg="Executing migration" id="add index team_member.team_id" 2026-03-10T14:52:47.804 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.6195404Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=572.511µs 2026-03-10T14:52:47.804 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.621207932Z 
level=info msg="Executing migration" id="Add column email to team table" 2026-03-10T14:52:47.804 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.622920106Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=1.711763ms 2026-03-10T14:52:47.804 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.62411908Z level=info msg="Executing migration" id="Add column external to team_member table" 2026-03-10T14:52:47.804 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.625707252Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=1.587911ms 2026-03-10T14:52:47.804 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.626722502Z level=info msg="Executing migration" id="Add column permission to team_member table" 2026-03-10T14:52:47.804 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.628284565Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=1.562304ms 2026-03-10T14:52:47.804 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.629727185Z level=info msg="Executing migration" id="create dashboard acl table" 2026-03-10T14:52:47.804 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.630250815Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=523.559µs 2026-03-10T14:52:47.804 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.63161081Z level=info msg="Executing migration" id="add index 
dashboard_acl_dashboard_id" 2026-03-10T14:52:47.804 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.632144288Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=533.548µs 2026-03-10T14:52:47.804 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.633465941Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id" 2026-03-10T14:52:47.804 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.634067427Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=601.867µs 2026-03-10T14:52:47.804 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.635913722Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id" 2026-03-10T14:52:47.804 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.636459664Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=546.202µs 2026-03-10T14:52:47.804 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.637912443Z level=info msg="Executing migration" id="add index dashboard_acl_user_id" 2026-03-10T14:52:47.804 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.638450721Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=537.445µs 2026-03-10T14:52:47.805 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.639874194Z level=info msg="Executing migration" id="add 
index dashboard_acl_team_id" 2026-03-10T14:52:47.805 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.640423923Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=550.01µs 2026-03-10T14:52:47.805 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.642212521Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role" 2026-03-10T14:52:47.805 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.642856667Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=604.552µs 2026-03-10T14:52:47.805 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.645544146Z level=info msg="Executing migration" id="add index dashboard_permission" 2026-03-10T14:52:47.805 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.646167182Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=623.698µs 2026-03-10T14:52:47.805 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.647615152Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table" 2026-03-10T14:52:47.805 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.648052259Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=436.958µs 2026-03-10T14:52:47.805 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.649512632Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders" 
2026-03-10T14:52:47.805 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.64975698Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=243.907µs 2026-03-10T14:52:47.805 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.65095973Z level=info msg="Executing migration" id="create tag table" 2026-03-10T14:52:47.805 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.651439429Z level=info msg="Migration successfully executed" id="create tag table" duration=479.498µs 2026-03-10T14:52:47.805 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.652931551Z level=info msg="Executing migration" id="add index tag.key_value" 2026-03-10T14:52:47.805 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.65345486Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=520.123µs 2026-03-10T14:52:47.805 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.654748572Z level=info msg="Executing migration" id="create login attempt table" 2026-03-10T14:52:47.805 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.655345809Z level=info msg="Migration successfully executed" id="create login attempt table" duration=597.578µs 2026-03-10T14:52:47.805 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.656685106Z level=info msg="Executing migration" id="add index login_attempt.username" 2026-03-10T14:52:47.805 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator 
t=2026-03-10T14:52:47.657304815Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=619.059µs 2026-03-10T14:52:47.805 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.658872008Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1" 2026-03-10T14:52:47.805 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.659445432Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=573.252µs 2026-03-10T14:52:47.805 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.660696754Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" 2026-03-10T14:52:47.805 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.665944414Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=5.243473ms 2026-03-10T14:52:47.805 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.667555089Z level=info msg="Executing migration" id="create login_attempt v2" 2026-03-10T14:52:47.805 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.668136095Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=581.327µs 2026-03-10T14:52:47.805 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.669623519Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2" 2026-03-10T14:52:47.805 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator 
t=2026-03-10T14:52:47.670209025Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=585.236µs 2026-03-10T14:52:47.805 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.67173993Z level=info msg="Executing migration" id="copy login_attempt v1 to v2" 2026-03-10T14:52:47.805 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.672090786Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=352.629µs 2026-03-10T14:52:47.805 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.673194672Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty" 2026-03-10T14:52:47.805 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.673707712Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=511.838µs 2026-03-10T14:52:47.805 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.675489698Z level=info msg="Executing migration" id="create user auth table" 2026-03-10T14:52:47.805 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.676060165Z level=info msg="Migration successfully executed" id="create user auth table" duration=570.687µs 2026-03-10T14:52:47.805 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.677070656Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1" 2026-03-10T14:52:47.805 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.67761768Z level=info msg="Migration successfully executed" 
id="create index IDX_user_auth_auth_module_auth_id - v1" duration=546.863µs 2026-03-10T14:52:47.805 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.678958178Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190" 2026-03-10T14:52:47.806 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.679111595Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=69.63µs 2026-03-10T14:52:47.806 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.680847334Z level=info msg="Executing migration" id="Add OAuth access token to user_auth" 2026-03-10T14:52:47.806 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.682582491Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=1.735037ms 2026-03-10T14:52:47.806 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.683943278Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth" 2026-03-10T14:52:47.806 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.685557609Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=1.61327ms 2026-03-10T14:52:47.806 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.686703573Z level=info msg="Executing migration" id="Add OAuth token type to user_auth" 2026-03-10T14:52:47.806 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.688725087Z level=info msg="Migration successfully executed" id="Add OAuth token type to 
user_auth" duration=2.019921ms 2026-03-10T14:52:47.806 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.695171661Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth" 2026-03-10T14:52:47.806 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.69718508Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=2.014079ms 2026-03-10T14:52:47.806 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.698752963Z level=info msg="Executing migration" id="Add index to user_id column in user_auth" 2026-03-10T14:52:47.806 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.699462271Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=709.829µs 2026-03-10T14:52:47.806 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.700804763Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth" 2026-03-10T14:52:47.806 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.702663753Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=1.857968ms 2026-03-10T14:52:47.806 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.70428682Z level=info msg="Executing migration" id="create server_lock table" 2026-03-10T14:52:47.806 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.70481584Z level=info msg="Migration successfully executed" id="create server_lock table" duration=531.785µs 2026-03-10T14:52:47.806 
INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.706243652Z level=info msg="Executing migration" id="add index server_lock.operation_uid" 2026-03-10T14:52:47.806 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.706913586Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=670.154µs 2026-03-10T14:52:47.806 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.708284772Z level=info msg="Executing migration" id="create user auth token table" 2026-03-10T14:52:47.806 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.70889782Z level=info msg="Migration successfully executed" id="create user auth token table" duration=611.385µs 2026-03-10T14:52:47.806 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.710703599Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token" 2026-03-10T14:52:47.806 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.711471537Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=768.368µs 2026-03-10T14:52:47.806 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.712920257Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token" 2026-03-10T14:52:47.806 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.713481778Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=561.721µs 2026-03-10T14:52:47.806 
INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.714812008Z level=info msg="Executing migration" id="add index user_auth_token.user_id" 2026-03-10T14:52:47.806 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.715446785Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=634.877µs 2026-03-10T14:52:47.806 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.717470403Z level=info msg="Executing migration" id="Add revoked_at to the user auth token" 2026-03-10T14:52:47.806 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.719493498Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=2.021522ms 2026-03-10T14:52:47.806 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.720907615Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at" 2026-03-10T14:52:47.806 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.721529239Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=622.114µs 2026-03-10T14:52:47.806 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.722931973Z level=info msg="Executing migration" id="create cache_data table" 2026-03-10T14:52:47.806 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.72351257Z level=info msg="Migration successfully executed" id="create cache_data table" duration=581.458µs 2026-03-10T14:52:47.806 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 
bash[50670]: logger=migrator t=2026-03-10T14:52:47.725159192Z level=info msg="Executing migration" id="add unique index cache_data.cache_key" 2026-03-10T14:52:47.806 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.725771939Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=613.098µs 2026-03-10T14:52:47.806 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.727063496Z level=info msg="Executing migration" id="create short_url table v1" 2026-03-10T14:52:47.806 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.727628995Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=565.239µs 2026-03-10T14:52:47.806 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.728916204Z level=info msg="Executing migration" id="add index short_url.org_id-uid" 2026-03-10T14:52:47.806 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.729555729Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=638.013µs 2026-03-10T14:52:47.806 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.731116911Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" 2026-03-10T14:52:47.806 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.731150124Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=34.785µs 2026-03-10T14:52:47.806 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator 
t=2026-03-10T14:52:47.732401515Z level=info msg="Executing migration" id="delete alert_definition table" 2026-03-10T14:52:47.806 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.732548841Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=147.146µs 2026-03-10T14:52:47.806 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.733609235Z level=info msg="Executing migration" id="recreate alert_definition table" 2026-03-10T14:52:47.806 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.734143646Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=534.32µs 2026-03-10T14:52:47.806 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.735714946Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns" 2026-03-10T14:52:47.806 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.736298579Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=583.733µs 2026-03-10T14:52:47.806 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.737527508Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns" 2026-03-10T14:52:47.806 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.738066016Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=539.449µs 2026-03-10T14:52:47.806 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: 
logger=migrator t=2026-03-10T14:52:47.739455527Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql"
2026-03-10T14:52:47.807 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.739543351Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=88.036µs
2026-03-10T14:52:47.807 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.740835649Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns"
2026-03-10T14:52:47.807 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.741540158Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=704.259µs
2026-03-10T14:52:47.807 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.743097372Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns"
2026-03-10T14:52:47.807 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.743772045Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=674.613µs
2026-03-10T14:52:47.807 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.745083118Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns"
2026-03-10T14:52:47.807 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.745724929Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=641.882µs
2026-03-10T14:52:47.807 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.747420482Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns"
2026-03-10T14:52:47.807 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.748392352Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=973.102µs
2026-03-10T14:52:47.807 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.749563243Z level=info msg="Executing migration" id="Add column paused in alert_definition"
2026-03-10T14:52:47.807 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.75194488Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=2.381266ms
2026-03-10T14:52:47.807 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.753324301Z level=info msg="Executing migration" id="drop alert_definition table"
2026-03-10T14:52:47.807 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.753992401Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=668.211µs
2026-03-10T14:52:47.807 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.755529609Z level=info msg="Executing migration" id="delete alert_definition_version table"
2026-03-10T14:52:47.807 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.755713402Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=184.075µs
2026-03-10T14:52:47.807 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.756819212Z level=info msg="Executing migration" id="recreate alert_definition_version table"
2026-03-10T14:52:47.807 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.757437288Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=617.936µs
2026-03-10T14:52:47.807 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.758392206Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns"
2026-03-10T14:52:47.807 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.759084491Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=692.216µs
2026-03-10T14:52:47.807 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.760574019Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns"
2026-03-10T14:52:47.807 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.761218395Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=644.435µs
2026-03-10T14:52:47.807 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.762318864Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql"
2026-03-10T14:52:47.807 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.762350834Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=32.761µs
2026-03-10T14:52:47.807 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.76396198Z level=info msg="Executing migration" id="drop alert_definition_version table"
2026-03-10T14:52:47.807 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.764646771Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=684.7µs
2026-03-10T14:52:47.807 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.766151627Z level=info msg="Executing migration" id="create alert_instance table"
2026-03-10T14:52:47.807 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.766782569Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=630.812µs
2026-03-10T14:52:47.807 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.768260504Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns"
2026-03-10T14:52:47.807 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.769054469Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=796.46µs
2026-03-10T14:52:47.807 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.770661476Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns"
2026-03-10T14:52:47.807 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.771422251Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=761.466µs
2026-03-10T14:52:47.807 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.772787947Z level=info msg="Executing migration" id="add column current_state_end to alert_instance"
2026-03-10T14:52:47.807 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.774737464Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=1.949688ms
2026-03-10T14:52:47.807 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.776033961Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance"
2026-03-10T14:52:47.807 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.776584101Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=550.56µs
2026-03-10T14:52:47.807 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.777954926Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance"
2026-03-10T14:52:47.807 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.778487623Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=532.727µs
2026-03-10T14:52:47.807 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.779873196Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance"
2026-03-10T14:52:47.807 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.788265051Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=8.385704ms
2026-03-10T14:52:47.807 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.790241891Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance"
2026-03-10T14:52:47.807 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.80133962Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=11.097438ms
2026-03-10T14:52:48.052 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.803787682Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance"
2026-03-10T14:52:48.052 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.804760712Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=975.276µs
2026-03-10T14:52:48.052 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.806367219Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance"
2026-03-10T14:52:48.052 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.807155163Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=786.311µs
2026-03-10T14:52:48.052 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.808557277Z level=info msg="Executing migration" id="add current_reason column related to current_state"
2026-03-10T14:52:48.052 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.811767565Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=3.208875ms
2026-03-10T14:52:48.052 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.813056828Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance"
2026-03-10T14:52:48.052 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.815615166Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=2.558187ms
2026-03-10T14:52:48.052 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.81716702Z level=info msg="Executing migration" id="create alert_rule table"
2026-03-10T14:52:48.052 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.817695819Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=528.418µs
2026-03-10T14:52:48.052 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.818949926Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns"
2026-03-10T14:52:48.052 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.819513782Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=563.566µs
2026-03-10T14:52:48.052 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.827062729Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns"
2026-03-10T14:52:48.052 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.827745778Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=682.378µs
2026-03-10T14:52:48.053 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.829652286Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns"
2026-03-10T14:52:48.053 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.830396609Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=743.962µs
2026-03-10T14:52:48.053 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.831852183Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql"
2026-03-10T14:52:48.053 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.832038281Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=185.848µs
2026-03-10T14:52:48.053 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.833295293Z level=info msg="Executing migration" id="add column for to alert_rule"
2026-03-10T14:52:48.053 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.83636563Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=3.065877ms
2026-03-10T14:52:48.053 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.838252932Z level=info msg="Executing migration" id="add column annotations to alert_rule"
2026-03-10T14:52:48.053 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.840403235Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=2.149342ms
2026-03-10T14:52:48.053 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.841682019Z level=info msg="Executing migration" id="add column labels to alert_rule"
2026-03-10T14:52:48.053 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.843973197Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=2.289976ms
2026-03-10T14:52:48.053 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.845329265Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns"
2026-03-10T14:52:48.053 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.846058258Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=728.954µs
2026-03-10T14:52:48.053 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.847765915Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns"
2026-03-10T14:52:48.053 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.848377389Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=612.466µs
2026-03-10T14:52:48.053 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.849474082Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule"
2026-03-10T14:52:48.053 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.852007132Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=2.53276ms
2026-03-10T14:52:48.053 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.853195026Z level=info msg="Executing migration" id="add panel_id column to alert_rule"
2026-03-10T14:52:48.053 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.855695595Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=2.500611ms
2026-03-10T14:52:48.053 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.857241047Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns"
2026-03-10T14:52:48.053 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.857947078Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=705.851µs
2026-03-10T14:52:48.053 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.859368768Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule"
2026-03-10T14:52:48.053 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.862108947Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=2.740629ms
2026-03-10T14:52:48.053 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.863375537Z level=info msg="Executing migration" id="add is_paused column to alert_rule table"
2026-03-10T14:52:48.053 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.866378707Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=3.00322ms
2026-03-10T14:52:48.053 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.868885478Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table"
2026-03-10T14:52:48.053 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.868997979Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=116.919µs
2026-03-10T14:52:48.053 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.870538152Z level=info msg="Executing migration" id="create alert_rule_version table"
2026-03-10T14:52:48.053 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.871265824Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=728.533µs
2026-03-10T14:52:48.053 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.872668759Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns"
2026-03-10T14:52:48.053 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.873273932Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=604.011µs
2026-03-10T14:52:48.053 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.874754743Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns"
2026-03-10T14:52:48.053 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.875409739Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=654.985µs
2026-03-10T14:52:48.053 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.876599174Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql"
2026-03-10T14:52:48.053 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.876632688Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=34.575µs
2026-03-10T14:52:48.053 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.877623081Z level=info msg="Executing migration" id="add column for to alert_rule_version"
2026-03-10T14:52:48.053 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.87953019Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=1.906356ms
2026-03-10T14:52:48.053 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.881059231Z level=info msg="Executing migration" id="add column annotations to alert_rule_version"
2026-03-10T14:52:48.053 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.883073311Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=2.013929ms
2026-03-10T14:52:48.053 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.884282484Z level=info msg="Executing migration" id="add column labels to alert_rule_version"
2026-03-10T14:52:48.053 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.886192498Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=1.908712ms
2026-03-10T14:52:48.053 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.887517558Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version"
2026-03-10T14:52:48.053 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.889556804Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=2.037994ms
2026-03-10T14:52:48.053 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.89101831Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table"
2026-03-10T14:52:48.053 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.893020256Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=2.001465ms
2026-03-10T14:52:48.053 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.894303467Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table"
2026-03-10T14:52:48.053 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.894339495Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=36.718µs
2026-03-10T14:52:48.053 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.895673611Z level=info msg="Executing migration" id=create_alert_configuration_table
2026-03-10T14:52:48.053 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.896161203Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=487.522µs
2026-03-10T14:52:48.053 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.897464943Z level=info msg="Executing migration" id="Add column default in alert_configuration"
2026-03-10T14:52:48.053 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.899431114Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=1.965218ms
2026-03-10T14:52:48.054 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.901098564Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql"
2026-03-10T14:52:48.054 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.90113352Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=35.507µs
2026-03-10T14:52:48.054 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.902259777Z level=info msg="Executing migration" id="add column org_id in alert_configuration"
2026-03-10T14:52:48.054 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.904514908Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=2.252576ms
2026-03-10T14:52:48.054 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.905923814Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column"
2026-03-10T14:52:48.054 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.906508508Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=585.185µs
2026-03-10T14:52:48.054 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.908090649Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration"
2026-03-10T14:52:48.054 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.910723487Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=2.631394ms
2026-03-10T14:52:48.054 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.91205592Z level=info msg="Executing migration" id=create_ngalert_configuration_table
2026-03-10T14:52:48.054 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.912509519Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=454.03µs
2026-03-10T14:52:48.054 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.913689317Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column"
2026-03-10T14:52:48.054 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.914258653Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=569.266µs
2026-03-10T14:52:48.054 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.915807682Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration"
2026-03-10T14:52:48.054 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.917761988Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=1.954668ms
2026-03-10T14:52:48.054 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.918897273Z level=info msg="Executing migration" id="create provenance_type table"
2026-03-10T14:52:48.054 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.919393252Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=495.067µs
2026-03-10T14:52:48.054 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.920790557Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns"
2026-03-10T14:52:48.054 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.921348191Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=559.367µs
2026-03-10T14:52:48.054 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.922601806Z level=info msg="Executing migration" id="create alert_image table"
2026-03-10T14:52:48.054 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.9231113Z level=info msg="Migration successfully executed" id="create alert_image table" duration=509.314µs
2026-03-10T14:52:48.054 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.924565081Z level=info msg="Executing migration" id="add unique index on token to alert_image table"
2026-03-10T14:52:48.054 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.925102897Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=537.906µs
2026-03-10T14:52:48.054 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.926334211Z level=info msg="Executing migration" id="support longer URLs in alert_image table"
2026-03-10T14:52:48.054 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.926378906Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=45.234µs
2026-03-10T14:52:48.054 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.927739271Z level=info msg="Executing migration" id=create_alert_configuration_history_table
2026-03-10T14:52:48.054 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.928265165Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=523.499µs
2026-03-10T14:52:48.054 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.929965318Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration"
2026-03-10T14:52:48.054 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.93046931Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=504.374µs
2026-03-10T14:52:48.055 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.931599666Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists"
2026-03-10T14:52:48.055 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.931818746Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists"
2026-03-10T14:52:48.055 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.932792348Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table"
2026-03-10T14:52:48.055 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.933101336Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=309.659µs
2026-03-10T14:52:48.055 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.934277737Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration"
2026-03-10T14:52:48.055 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.934790477Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=512.699µs
2026-03-10T14:52:48.055 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.935994821Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history"
2026-03-10T14:52:48.055 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.937943107Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=1.948034ms
2026-03-10T14:52:48.055 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.938962424Z level=info msg="Executing migration" id="create library_element table v1"
2026-03-10T14:52:48.055 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.939530768Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=568.335µs
2026-03-10T14:52:48.055 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.941124331Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind"
2026-03-10T14:52:48.055 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.941622152Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=497.691µs
2026-03-10T14:52:48.055 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.942717492Z level=info msg="Executing migration" id="create library_element_connection table v1"
2026-03-10T14:52:48.055 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.94321871Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=501.068µs
2026-03-10T14:52:48.055 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.94471994Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id"
2026-03-10T14:52:48.055 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.945298542Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=578.643µs
2026-03-10T14:52:48.055 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.946411966Z level=info msg="Executing migration" id="add unique index library_element org_id_uid"
2026-03-10T14:52:48.055 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.947018531Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=606.204µs
2026-03-10T14:52:48.055 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.948191487Z level=info msg="Executing migration" id="increase max description length to 2048"
2026-03-10T14:52:48.055 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.948204872Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=14.888µs
2026-03-10T14:52:48.055
INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.949329196Z level=info msg="Executing migration" id="alter library_element model to mediumtext" 2026-03-10T14:52:48.055 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.949366195Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=36.088µs 2026-03-10T14:52:48.055 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.950900657Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting" 2026-03-10T14:52:48.055 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.951131378Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=230.592µs 2026-03-10T14:52:48.055 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.952250873Z level=info msg="Executing migration" id="create data_keys table" 2026-03-10T14:52:48.055 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.952788158Z level=info msg="Migration successfully executed" id="create data_keys table" duration=536.935µs 2026-03-10T14:52:48.055 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.954107668Z level=info msg="Executing migration" id="create secrets table" 2026-03-10T14:52:48.055 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.954529317Z level=info msg="Migration successfully executed" id="create secrets table" duration=419.786µs 2026-03-10T14:52:48.055 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 
bash[50670]: logger=migrator t=2026-03-10T14:52:47.95604305Z level=info msg="Executing migration" id="rename data_keys name column to id" 2026-03-10T14:52:48.055 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.96615799Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=10.111924ms 2026-03-10T14:52:48.055 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.967787841Z level=info msg="Executing migration" id="add name column into data_keys" 2026-03-10T14:52:48.055 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.970150171Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=2.361989ms 2026-03-10T14:52:48.055 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.971487504Z level=info msg="Executing migration" id="copy data_keys id column values into name" 2026-03-10T14:52:48.055 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.97167249Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=185.367µs 2026-03-10T14:52:48.055 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.972790482Z level=info msg="Executing migration" id="rename data_keys name column to label" 2026-03-10T14:52:48.055 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.983499213Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=10.704864ms 2026-03-10T14:52:48.055 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator 
t=2026-03-10T14:52:47.985360728Z level=info msg="Executing migration" id="rename data_keys id column back to name" 2026-03-10T14:52:48.055 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.995537472Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=10.173358ms 2026-03-10T14:52:48.055 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.997111499Z level=info msg="Executing migration" id="create kv_store table v1" 2026-03-10T14:52:48.055 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.997746436Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=624.629µs 2026-03-10T14:52:48.055 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.999075944Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key" 2026-03-10T14:52:48.055 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:47 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:47.999778078Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=702.534µs 2026-03-10T14:52:48.055 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.001448936Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations" 2026-03-10T14:52:48.055 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.001692932Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=245.138µs 2026-03-10T14:52:48.055 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator 
t=2026-03-10T14:52:48.002716268Z level=info msg="Executing migration" id="create permission table" 2026-03-10T14:52:48.055 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.003274582Z level=info msg="Migration successfully executed" id="create permission table" duration=558.324µs 2026-03-10T14:52:48.055 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.004473866Z level=info msg="Executing migration" id="add unique index permission.role_id" 2026-03-10T14:52:48.055 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.005006903Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=533.458µs 2026-03-10T14:52:48.055 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.006535995Z level=info msg="Executing migration" id="add unique index role_id_action_scope" 2026-03-10T14:52:48.056 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.007098659Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=563.055µs 2026-03-10T14:52:48.056 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.008374757Z level=info msg="Executing migration" id="create role table" 2026-03-10T14:52:48.056 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.008866306Z level=info msg="Migration successfully executed" id="create role table" duration=491.8µs 2026-03-10T14:52:48.056 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.010112329Z level=info msg="Executing migration" id="add column display_name" 
2026-03-10T14:52:48.056 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.01275263Z level=info msg="Migration successfully executed" id="add column display_name" duration=2.641102ms 2026-03-10T14:52:48.056 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.014507143Z level=info msg="Executing migration" id="add column group_name" 2026-03-10T14:52:48.056 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.016834338Z level=info msg="Migration successfully executed" id="add column group_name" duration=2.326293ms 2026-03-10T14:52:48.056 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.018247924Z level=info msg="Executing migration" id="add index role.org_id" 2026-03-10T14:52:48.056 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.018803284Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=555.78µs 2026-03-10T14:52:48.056 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.020191993Z level=info msg="Executing migration" id="add unique index role_org_id_name" 2026-03-10T14:52:48.056 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.02068758Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=495.667µs 2026-03-10T14:52:48.056 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.022218124Z level=info msg="Executing migration" id="add index role_org_id_uid" 2026-03-10T14:52:48.056 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator 
t=2026-03-10T14:52:48.022725493Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=507.389µs 2026-03-10T14:52:48.056 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.02385664Z level=info msg="Executing migration" id="create team role table" 2026-03-10T14:52:48.056 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.024269152Z level=info msg="Migration successfully executed" id="create team role table" duration=413.093µs 2026-03-10T14:52:48.056 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.025313206Z level=info msg="Executing migration" id="add index team_role.org_id" 2026-03-10T14:52:48.056 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.025823141Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=509.754µs 2026-03-10T14:52:48.056 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.027322557Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id" 2026-03-10T14:52:48.056 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.027840947Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=518.41µs 2026-03-10T14:52:48.056 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.029021366Z level=info msg="Executing migration" id="add index team_role.team_id" 2026-03-10T14:52:48.056 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.029481446Z level=info msg="Migration successfully executed" 
id="add index team_role.team_id" duration=460.421µs 2026-03-10T14:52:48.056 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.030532734Z level=info msg="Executing migration" id="create user role table" 2026-03-10T14:52:48.056 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.03095853Z level=info msg="Migration successfully executed" id="create user role table" duration=425.386µs 2026-03-10T14:52:48.056 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.032419184Z level=info msg="Executing migration" id="add index user_role.org_id" 2026-03-10T14:52:48.056 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.032897851Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=478.426µs 2026-03-10T14:52:48.056 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.034025831Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id" 2026-03-10T14:52:48.056 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.034504487Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=478.516µs 2026-03-10T14:52:48.056 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.035603414Z level=info msg="Executing migration" id="add index user_role.user_id" 2026-03-10T14:52:48.056 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.036154755Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=551.301µs 2026-03-10T14:52:48.056 
INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.037732017Z level=info msg="Executing migration" id="create builtin role table" 2026-03-10T14:52:48.056 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.038195595Z level=info msg="Migration successfully executed" id="create builtin role table" duration=463.177µs 2026-03-10T14:52:48.056 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.039430436Z level=info msg="Executing migration" id="add index builtin_role.role_id" 2026-03-10T14:52:48.056 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.0399727Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=542.946µs 2026-03-10T14:52:48.056 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.041140637Z level=info msg="Executing migration" id="add index builtin_role.name" 2026-03-10T14:52:48.056 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.041672121Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=531.725µs 2026-03-10T14:52:48.056 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.04319993Z level=info msg="Executing migration" id="Add column org_id to builtin_role table" 2026-03-10T14:52:48.056 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.045562452Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=2.362121ms 2026-03-10T14:52:48.056 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: 
logger=migrator t=2026-03-10T14:52:48.046664464Z level=info msg="Executing migration" id="add index builtin_role.org_id" 2026-03-10T14:52:48.056 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.047196389Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=531.945µs 2026-03-10T14:52:48.056 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.048288022Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" 2026-03-10T14:52:48.056 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.048820117Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=530.072µs 2026-03-10T14:52:48.057 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.04993262Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid" 2026-03-10T14:52:48.057 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.050421335Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=486.701µs 2026-03-10T14:52:48.057 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.051863494Z level=info msg="Executing migration" id="add unique index role.uid" 2026-03-10T14:52:48.057 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:47 vm03 bash[23394]: audit 2026-03-10T14:52:46.868405+0000 mon.a (mon.0) 812 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:48.057 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:47 vm03 bash[23394]: audit 2026-03-10T14:52:46.868405+0000 mon.a (mon.0) 812 : audit [INF] 
from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:48.057 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:47 vm03 bash[23394]: audit 2026-03-10T14:52:46.873981+0000 mon.a (mon.0) 813 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:48.057 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:47 vm03 bash[23394]: audit 2026-03-10T14:52:46.873981+0000 mon.a (mon.0) 813 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:48.057 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:47 vm03 bash[23394]: audit 2026-03-10T14:52:46.879942+0000 mon.a (mon.0) 814 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:48.057 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:47 vm03 bash[23394]: audit 2026-03-10T14:52:46.879942+0000 mon.a (mon.0) 814 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:48.057 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:47 vm03 bash[23394]: audit 2026-03-10T14:52:46.888067+0000 mon.a (mon.0) 815 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:48.057 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:47 vm03 bash[23394]: audit 2026-03-10T14:52:46.888067+0000 mon.a (mon.0) 815 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:48.057 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:47 vm03 bash[23394]: audit 2026-03-10T14:52:46.901817+0000 mon.a (mon.0) 816 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:52:48.057 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:47 vm03 bash[23394]: audit 2026-03-10T14:52:46.901817+0000 mon.a (mon.0) 816 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 
2026-03-10T14:52:48.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:47 vm00 bash[28403]: audit 2026-03-10T14:52:46.868405+0000 mon.a (mon.0) 812 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:48.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:47 vm00 bash[28403]: audit 2026-03-10T14:52:46.868405+0000 mon.a (mon.0) 812 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:48.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:47 vm00 bash[28403]: audit 2026-03-10T14:52:46.873981+0000 mon.a (mon.0) 813 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:48.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:47 vm00 bash[28403]: audit 2026-03-10T14:52:46.873981+0000 mon.a (mon.0) 813 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:48.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:47 vm00 bash[28403]: audit 2026-03-10T14:52:46.879942+0000 mon.a (mon.0) 814 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:48.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:47 vm00 bash[28403]: audit 2026-03-10T14:52:46.879942+0000 mon.a (mon.0) 814 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:48.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:47 vm00 bash[28403]: audit 2026-03-10T14:52:46.888067+0000 mon.a (mon.0) 815 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:48.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:47 vm00 bash[28403]: audit 2026-03-10T14:52:46.888067+0000 mon.a (mon.0) 815 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:48.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:47 vm00 bash[28403]: audit 2026-03-10T14:52:46.901817+0000 mon.a (mon.0) 816 : audit [DBG] from='mgr.24425 
192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:52:48.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:47 vm00 bash[28403]: audit 2026-03-10T14:52:46.901817+0000 mon.a (mon.0) 816 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:52:48.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:47 vm00 bash[20726]: audit 2026-03-10T14:52:46.868405+0000 mon.a (mon.0) 812 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:48.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:47 vm00 bash[20726]: audit 2026-03-10T14:52:46.868405+0000 mon.a (mon.0) 812 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:48.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:47 vm00 bash[20726]: audit 2026-03-10T14:52:46.873981+0000 mon.a (mon.0) 813 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:48.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:47 vm00 bash[20726]: audit 2026-03-10T14:52:46.873981+0000 mon.a (mon.0) 813 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:48.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:47 vm00 bash[20726]: audit 2026-03-10T14:52:46.879942+0000 mon.a (mon.0) 814 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:48.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:47 vm00 bash[20726]: audit 2026-03-10T14:52:46.879942+0000 mon.a (mon.0) 814 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:48.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:47 vm00 bash[20726]: audit 2026-03-10T14:52:46.888067+0000 mon.a (mon.0) 815 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 
2026-03-10T14:52:48.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:47 vm00 bash[20726]: audit 2026-03-10T14:52:46.888067+0000 mon.a (mon.0) 815 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:48.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:47 vm00 bash[20726]: audit 2026-03-10T14:52:46.901817+0000 mon.a (mon.0) 816 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:52:48.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:47 vm00 bash[20726]: audit 2026-03-10T14:52:46.901817+0000 mon.a (mon.0) 816 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:52:48.305 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.052351938Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=488.493µs 2026-03-10T14:52:48.305 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.05332058Z level=info msg="Executing migration" id="create seed assignment table" 2026-03-10T14:52:48.305 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.053998679Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=678.309µs 2026-03-10T14:52:48.305 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.055742713Z level=info msg="Executing migration" id="add unique index builtin_role_role_name" 2026-03-10T14:52:48.305 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.056270361Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" 
duration=527.668µs 2026-03-10T14:52:48.305 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.057420092Z level=info msg="Executing migration" id="add column hidden to role table" 2026-03-10T14:52:48.305 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.059776232Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=2.355797ms 2026-03-10T14:52:48.305 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.060878795Z level=info msg="Executing migration" id="permission kind migration" 2026-03-10T14:52:48.305 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.063153633Z level=info msg="Migration successfully executed" id="permission kind migration" duration=2.275829ms 2026-03-10T14:52:48.305 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.064202085Z level=info msg="Executing migration" id="permission attribute migration" 2026-03-10T14:52:48.305 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.06634246Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=2.140535ms 2026-03-10T14:52:48.305 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.067876853Z level=info msg="Executing migration" id="permission identifier migration" 2026-03-10T14:52:48.305 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.070197324Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=2.320051ms 2026-03-10T14:52:48.305 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 
14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.071412939Z level=info msg="Executing migration" id="add permission identifier index" 2026-03-10T14:52:48.305 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.071987686Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=574.837µs 2026-03-10T14:52:48.305 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.07317651Z level=info msg="Executing migration" id="add permission action scope role_id index" 2026-03-10T14:52:48.305 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.07375357Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=576.82µs 2026-03-10T14:52:48.305 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.075317727Z level=info msg="Executing migration" id="remove permission role_id action scope index" 2026-03-10T14:52:48.306 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.075840054Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=522.869µs 2026-03-10T14:52:48.306 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.076873809Z level=info msg="Executing migration" id="create query_history table v1" 2026-03-10T14:52:48.306 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.077299145Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=425.085µs 2026-03-10T14:52:48.306 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator 
t=2026-03-10T14:52:48.078514199Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" 2026-03-10T14:52:48.306 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.079060431Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=546.152µs 2026-03-10T14:52:48.306 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.080247432Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" 2026-03-10T14:52:48.306 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.080275104Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=27.912µs 2026-03-10T14:52:48.306 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.081254898Z level=info msg="Executing migration" id="rbac disabled migrator" 2026-03-10T14:52:48.306 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.081281768Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=26.519µs 2026-03-10T14:52:48.306 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.082321484Z level=info msg="Executing migration" id="teams permissions migration" 2026-03-10T14:52:48.306 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.082599264Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=277.86µs 2026-03-10T14:52:48.306 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: 
logger=migrator t=2026-03-10T14:52:48.084146168Z level=info msg="Executing migration" id="dashboard permissions" 2026-03-10T14:52:48.306 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.084461249Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=316.052µs 2026-03-10T14:52:48.306 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.085450229Z level=info msg="Executing migration" id="dashboard permissions uid scopes" 2026-03-10T14:52:48.306 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.085790016Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=341.34µs 2026-03-10T14:52:48.306 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.087025168Z level=info msg="Executing migration" id="drop managed folder create actions" 2026-03-10T14:52:48.306 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.087148398Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=122.499µs 2026-03-10T14:52:48.306 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.088357561Z level=info msg="Executing migration" id="alerting notification permissions" 2026-03-10T14:52:48.306 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.088714118Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=356.939µs 2026-03-10T14:52:48.306 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.090025392Z level=info msg="Executing migration" 
id="create query_history_star table v1" 2026-03-10T14:52:48.306 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.09056962Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=544.087µs 2026-03-10T14:52:48.306 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.09182511Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" 2026-03-10T14:52:48.306 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.092547772Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=722.743µs 2026-03-10T14:52:48.306 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.093961408Z level=info msg="Executing migration" id="add column org_id in query_history_star" 2026-03-10T14:52:48.306 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.100614428Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=6.64777ms 2026-03-10T14:52:48.306 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.110726272Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" 2026-03-10T14:52:48.306 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.111396327Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=673.581µs 2026-03-10T14:52:48.306 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.113173913Z level=info 
msg="Executing migration" id="create correlation table v1" 2026-03-10T14:52:48.306 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.114328825Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=1.156053ms 2026-03-10T14:52:48.306 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.116147999Z level=info msg="Executing migration" id="add index correlations.uid" 2026-03-10T14:52:48.306 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.116856966Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=708.827µs 2026-03-10T14:52:48.306 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.118142601Z level=info msg="Executing migration" id="add index correlations.source_uid" 2026-03-10T14:52:48.306 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.118797568Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=654.896µs 2026-03-10T14:52:48.306 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.12005505Z level=info msg="Executing migration" id="add correlation config column" 2026-03-10T14:52:48.306 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.12270512Z level=info msg="Migration successfully executed" id="add correlation config column" duration=2.65006ms 2026-03-10T14:52:48.306 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.124396094Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" 2026-03-10T14:52:48.306 
INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.125023428Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=627.213µs 2026-03-10T14:52:48.306 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.126227061Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1" 2026-03-10T14:52:48.306 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.126922042Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=695.362µs 2026-03-10T14:52:48.306 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.128765862Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1" 2026-03-10T14:52:48.306 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.135986516Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=7.220865ms 2026-03-10T14:52:48.306 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.13732493Z level=info msg="Executing migration" id="create correlation v2" 2026-03-10T14:52:48.306 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.138048515Z level=info msg="Migration successfully executed" id="create correlation v2" duration=723.403µs 2026-03-10T14:52:48.306 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.139528825Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2" 2026-03-10T14:52:48.307 
INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.140177248Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=648.583µs 2026-03-10T14:52:48.307 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.141829079Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2" 2026-03-10T14:52:48.307 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.142514252Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=685.273µs 2026-03-10T14:52:48.307 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.143846514Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2" 2026-03-10T14:52:48.307 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.144443823Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=597.308µs 2026-03-10T14:52:48.307 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.145822834Z level=info msg="Executing migration" id="copy correlation v1 to v2" 2026-03-10T14:52:48.307 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.146091697Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=269.013µs 2026-03-10T14:52:48.307 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.147444188Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty" 2026-03-10T14:52:48.307 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 
14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.148000969Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=556.341µs 2026-03-10T14:52:48.307 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.148960055Z level=info msg="Executing migration" id="add provisioning column" 2026-03-10T14:52:48.307 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.151571262Z level=info msg="Migration successfully executed" id="add provisioning column" duration=2.611067ms 2026-03-10T14:52:48.307 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.152705694Z level=info msg="Executing migration" id="create entity_events table" 2026-03-10T14:52:48.307 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.153193126Z level=info msg="Migration successfully executed" id="create entity_events table" duration=485.989µs 2026-03-10T14:52:48.307 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.154566567Z level=info msg="Executing migration" id="create dashboard public config v1" 2026-03-10T14:52:48.307 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.155202006Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=635.29µs 2026-03-10T14:52:48.307 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.1564809Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1" 2026-03-10T14:52:48.307 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.15685483Z level=warn 
msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1" 2026-03-10T14:52:48.307 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.157882523Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 2026-03-10T14:52:48.307 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.158196441Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 2026-03-10T14:52:48.307 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.16008302Z level=info msg="Executing migration" id="Drop old dashboard public config table" 2026-03-10T14:52:48.307 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.160569111Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=486.021µs 2026-03-10T14:52:48.307 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.161652749Z level=info msg="Executing migration" id="recreate dashboard public config v1" 2026-03-10T14:52:48.307 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.162212406Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=559.537µs 2026-03-10T14:52:48.307 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.163564647Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" 2026-03-10T14:52:48.307 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 
14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.164152777Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=587.92µs 2026-03-10T14:52:48.307 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.165705813Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 2026-03-10T14:52:48.307 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.166401716Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=694.11µs 2026-03-10T14:52:48.307 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.167597855Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" 2026-03-10T14:52:48.307 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.168316068Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=718.014µs 2026-03-10T14:52:48.307 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.169398915Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 2026-03-10T14:52:48.307 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.170034094Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=634.939µs 2026-03-10T14:52:48.307 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.171700823Z level=info msg="Executing migration" id="Drop 
public config table" 2026-03-10T14:52:48.307 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.172193756Z level=info msg="Migration successfully executed" id="Drop public config table" duration=492.862µs 2026-03-10T14:52:48.307 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.173276311Z level=info msg="Executing migration" id="Recreate dashboard public config v2" 2026-03-10T14:52:48.307 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.173901721Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=625.57µs 2026-03-10T14:52:48.307 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.175613796Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" 2026-03-10T14:52:48.307 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.176324516Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=711.091µs 2026-03-10T14:52:48.307 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.177457967Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 2026-03-10T14:52:48.307 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.178159339Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=701.362µs 2026-03-10T14:52:48.307 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.179623499Z level=info msg="Executing 
migration" id="create index UQE_dashboard_public_config_access_token - v2" 2026-03-10T14:52:48.307 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.18029662Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=673.321µs 2026-03-10T14:52:48.307 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.181588718Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" 2026-03-10T14:52:48.307 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.18961566Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=8.026982ms 2026-03-10T14:52:48.307 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.190934228Z level=info msg="Executing migration" id="add annotations_enabled column" 2026-03-10T14:52:48.307 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.194012007Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=3.077298ms 2026-03-10T14:52:48.307 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.195262958Z level=info msg="Executing migration" id="add time_selection_enabled column" 2026-03-10T14:52:48.308 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.197836655Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=2.573426ms 2026-03-10T14:52:48.308 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator 
t=2026-03-10T14:52:48.19936741Z level=info msg="Executing migration" id="delete orphaned public dashboards" 2026-03-10T14:52:48.308 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.199584024Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=216.616µs 2026-03-10T14:52:48.308 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.202208166Z level=info msg="Executing migration" id="add share column" 2026-03-10T14:52:48.308 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.208146209Z level=info msg="Migration successfully executed" id="add share column" duration=5.937873ms 2026-03-10T14:52:48.308 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.209458275Z level=info msg="Executing migration" id="backfill empty share column fields with default of public" 2026-03-10T14:52:48.308 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.209705096Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=245.359µs 2026-03-10T14:52:48.308 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.211356857Z level=info msg="Executing migration" id="create file table" 2026-03-10T14:52:48.308 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.211937413Z level=info msg="Migration successfully executed" id="create file table" duration=580.446µs 2026-03-10T14:52:48.308 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.213541185Z level=info msg="Executing migration" id="file table 
idx: path natural pk" 2026-03-10T14:52:48.308 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.214198205Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=656.669µs 2026-03-10T14:52:48.308 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.215529345Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" 2026-03-10T14:52:48.308 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.21604482Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=515.444µs 2026-03-10T14:52:48.308 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.217334273Z level=info msg="Executing migration" id="create file_meta table" 2026-03-10T14:52:48.308 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.217752397Z level=info msg="Migration successfully executed" id="create file_meta table" duration=417.873µs 2026-03-10T14:52:48.308 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.219353012Z level=info msg="Executing migration" id="file table idx: path key" 2026-03-10T14:52:48.308 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.219907559Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=554.446µs 2026-03-10T14:52:48.308 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.221378121Z level=info msg="Executing migration" id="set path collation in file table" 2026-03-10T14:52:48.308 
INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.221623701Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=247.694µs 2026-03-10T14:52:48.308 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.222583217Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" 2026-03-10T14:52:48.308 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.222615286Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=32.04µs 2026-03-10T14:52:48.308 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.22381974Z level=info msg="Executing migration" id="managed permissions migration" 2026-03-10T14:52:48.308 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.224184803Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=365.114µs 2026-03-10T14:52:48.308 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.225393396Z level=info msg="Executing migration" id="managed folder permissions alert actions migration" 2026-03-10T14:52:48.308 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.225599722Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=206.175µs 2026-03-10T14:52:48.308 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.226643946Z level=info msg="Executing migration" id="RBAC action name migrator" 2026-03-10T14:52:48.308 
INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.227457629Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=813.232µs 2026-03-10T14:52:48.308 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.228716034Z level=info msg="Executing migration" id="Add UID column to playlist" 2026-03-10T14:52:48.308 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.232054983Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=3.337866ms 2026-03-10T14:52:48.308 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.233518251Z level=info msg="Executing migration" id="Update uid column values in playlist" 2026-03-10T14:52:48.308 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.233734957Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=217.007µs 2026-03-10T14:52:48.308 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.235308261Z level=info msg="Executing migration" id="Add index for uid in playlist" 2026-03-10T14:52:48.308 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.235980469Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=672.267µs 2026-03-10T14:52:48.308 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.237275152Z level=info msg="Executing migration" id="update group index for alert rules" 2026-03-10T14:52:48.308 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: 
logger=migrator t=2026-03-10T14:52:48.237604548Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=329.496µs 2026-03-10T14:52:48.308 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.238579222Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration" 2026-03-10T14:52:48.308 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.23878118Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=202.229µs 2026-03-10T14:52:48.308 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.240370153Z level=info msg="Executing migration" id="admin only folder/dashboard permission" 2026-03-10T14:52:48.308 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.240706715Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=336.931µs 2026-03-10T14:52:48.308 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.24182712Z level=info msg="Executing migration" id="add action column to seed_assignment" 2026-03-10T14:52:48.308 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.244703754Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=2.87447ms 2026-03-10T14:52:48.308 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.246249396Z level=info msg="Executing migration" id="add scope column to seed_assignment" 2026-03-10T14:52:48.308 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 
bash[50670]: logger=migrator t=2026-03-10T14:52:48.249523664Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=3.270461ms 2026-03-10T14:52:48.308 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.251059017Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update" 2026-03-10T14:52:48.308 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.251719334Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=662.169µs 2026-03-10T14:52:48.308 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.253403004Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable" 2026-03-10T14:52:48.308 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.280359663Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=26.950777ms 2026-03-10T14:52:48.308 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.285435622Z level=info msg="Executing migration" id="add unique index builtin_role_name back" 2026-03-10T14:52:48.308 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.286228967Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=797.432µs 2026-03-10T14:52:48.308 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.303717075Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope" 2026-03-10T14:52:48.566 
INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.304909396Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.194776ms 2026-03-10T14:52:48.566 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.311871204Z level=info msg="Executing migration" id="add primary key to seed_assigment" 2026-03-10T14:52:48.566 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.325067822Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=13.195024ms 2026-03-10T14:52:48.566 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.326982625Z level=info msg="Executing migration" id="add origin column to seed_assignment" 2026-03-10T14:52:48.566 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.330847438Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=3.833575ms 2026-03-10T14:52:48.566 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.332615726Z level=info msg="Executing migration" id="add origin to plugin seed_assignment" 2026-03-10T14:52:48.566 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.332939152Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=323.525µs 2026-03-10T14:52:48.566 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.334040964Z level=info msg="Executing migration" id="prevent seeding OnCall access" 2026-03-10T14:52:48.566 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 
10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.334263Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=222.326µs 2026-03-10T14:52:48.566 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.335580154Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration" 2026-03-10T14:52:48.566 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.335819762Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=237.695µs 2026-03-10T14:52:48.566 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.336971208Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration" 2026-03-10T14:52:48.566 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.337198032Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=227.015µs 2026-03-10T14:52:48.566 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.338780394Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse" 2026-03-10T14:52:48.566 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.339064386Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=284.091µs 2026-03-10T14:52:48.566 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.340695839Z level=info msg="Executing migration" id="create folder table" 2026-03-10T14:52:48.566 
INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.341269211Z level=info msg="Migration successfully executed" id="create folder table" duration=573.313µs 2026-03-10T14:52:48.566 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.342210513Z level=info msg="Executing migration" id="Add index for parent_uid" 2026-03-10T14:52:48.566 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.342894042Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=683.62µs 2026-03-10T14:52:48.566 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.34437789Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id" 2026-03-10T14:52:48.566 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.344986938Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=609.61µs 2026-03-10T14:52:48.566 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.346167768Z level=info msg="Executing migration" id="Update folder title length" 2026-03-10T14:52:48.567 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.34618447Z level=info msg="Migration successfully executed" id="Update folder title length" duration=14.647µs 2026-03-10T14:52:48.567 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.347462992Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid" 2026-03-10T14:52:48.567 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 
bash[50670]: logger=migrator t=2026-03-10T14:52:48.348133548Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=671.306µs 2026-03-10T14:52:48.567 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.349687907Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid" 2026-03-10T14:52:48.567 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.350256841Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=569.416µs 2026-03-10T14:52:48.567 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.351486172Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id" 2026-03-10T14:52:48.567 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.35207812Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=591.017µs 2026-03-10T14:52:48.567 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.353197264Z level=info msg="Executing migration" id="Sync dashboard and folder table" 2026-03-10T14:52:48.567 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.353502235Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=304.841µs 2026-03-10T14:52:48.567 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.354726465Z level=info msg="Executing migration" id="Remove ghost folders from the folder table" 2026-03-10T14:52:48.567 
INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.354978919Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=252.364µs 2026-03-10T14:52:48.567 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.356109344Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id" 2026-03-10T14:52:48.567 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.356687426Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=577.953µs 2026-03-10T14:52:48.567 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.357585106Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid" 2026-03-10T14:52:48.567 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.358151346Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=566.37µs 2026-03-10T14:52:48.567 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.359672352Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id" 2026-03-10T14:52:48.567 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.360214887Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=542.775µs 2026-03-10T14:52:48.567 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.361194581Z level=info msg="Executing migration" id="Add unique index 
UQE_folder_org_id_parent_uid_title" 2026-03-10T14:52:48.567 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.361770879Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=576.147µs 2026-03-10T14:52:48.567 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.362764258Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id" 2026-03-10T14:52:48.567 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.363331149Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=567.041µs 2026-03-10T14:52:48.567 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.364767477Z level=info msg="Executing migration" id="create anon_device table" 2026-03-10T14:52:48.567 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.365248537Z level=info msg="Migration successfully executed" id="create anon_device table" duration=481.33µs 2026-03-10T14:52:48.567 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.366159853Z level=info msg="Executing migration" id="add unique index anon_device.device_id" 2026-03-10T14:52:48.567 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.366866425Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=706.381µs 2026-03-10T14:52:48.567 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.368307121Z level=info msg="Executing migration" id="add index anon_device.updated_at" 
2026-03-10T14:52:48.567 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.368996261Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=690.342µs 2026-03-10T14:52:48.567 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.370134171Z level=info msg="Executing migration" id="create signing_key table" 2026-03-10T14:52:48.567 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.37070024Z level=info msg="Migration successfully executed" id="create signing_key table" duration=565.949µs 2026-03-10T14:52:48.567 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.371942174Z level=info msg="Executing migration" id="add unique index signing_key.key_id" 2026-03-10T14:52:48.567 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.37251133Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=567.333µs 2026-03-10T14:52:48.567 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.373841118Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore" 2026-03-10T14:52:48.567 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.374383944Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=543.036µs 2026-03-10T14:52:48.567 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.375507457Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore" 2026-03-10T14:52:48.567 
INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.375758316Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=251.09µs 2026-03-10T14:52:48.567 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.376643874Z level=info msg="Executing migration" id="Add folder_uid for dashboard" 2026-03-10T14:52:48.567 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.380329019Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=3.68165ms 2026-03-10T14:52:48.567 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.382062022Z level=info msg="Executing migration" id="Populate dashboard folder_uid column" 2026-03-10T14:52:48.567 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.382546821Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=483.184µs 2026-03-10T14:52:48.567 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.383651628Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title" 2026-03-10T14:52:48.567 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.38425701Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=605.483µs 2026-03-10T14:52:48.567 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.385470041Z level=info msg="Executing migration" id="Delete unique index for 
dashboard_org_id_folder_id_title" 2026-03-10T14:52:48.567 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.386060917Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=591.626µs 2026-03-10T14:52:48.567 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.387494179Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title" 2026-03-10T14:52:48.567 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.388055539Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=561.612µs 2026-03-10T14:52:48.567 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.389107548Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" 2026-03-10T14:52:48.567 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.389732337Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=624.688µs 2026-03-10T14:52:48.567 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.390677146Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title" 2026-03-10T14:52:48.567 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.391275776Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=598.531µs 2026-03-10T14:52:48.567 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 
bash[50670]: logger=migrator t=2026-03-10T14:52:48.392830235Z level=info msg="Executing migration" id="create sso_setting table" 2026-03-10T14:52:48.568 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.393412545Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=582.11µs 2026-03-10T14:52:48.568 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.394657555Z level=info msg="Executing migration" id="copy kvstore migration status to each org" 2026-03-10T14:52:48.568 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.395225127Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=568.052µs 2026-03-10T14:52:48.568 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.396431605Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status" 2026-03-10T14:52:48.568 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.396651246Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=219.901µs 2026-03-10T14:52:48.568 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.397896627Z level=info msg="Executing migration" id="alter kv_store.value to longtext" 2026-03-10T14:52:48.568 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.397924479Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=28.224µs 2026-03-10T14:52:48.568 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator 
t=2026-03-10T14:52:48.398915193Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table" 2026-03-10T14:52:48.568 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.401727516Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=2.810931ms 2026-03-10T14:52:48.568 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.402992744Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table" 2026-03-10T14:52:48.568 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.405799326Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=2.80586ms 2026-03-10T14:52:48.568 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.407423676Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration" 2026-03-10T14:52:48.568 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.407654477Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=230.622µs 2026-03-10T14:52:48.568 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=migrator t=2026-03-10T14:52:48.408645973Z level=info msg="migrations completed" performed=547 skipped=0 duration=1.367416245s 2026-03-10T14:52:48.568 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=sqlstore t=2026-03-10T14:52:48.409401387Z level=info msg="Created default organization" 2026-03-10T14:52:48.568 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 
bash[50670]: logger=secrets t=2026-03-10T14:52:48.410580904Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1 2026-03-10T14:52:48.568 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=plugin.store t=2026-03-10T14:52:48.419163577Z level=info msg="Loading plugins..." 2026-03-10T14:52:48.568 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[48459]: debug there is no tcmu-runner data available 2026-03-10T14:52:48.568 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=local.finder t=2026-03-10T14:52:48.465519129Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled 2026-03-10T14:52:48.568 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=plugin.store t=2026-03-10T14:52:48.465541862Z level=info msg="Plugins loaded" count=55 duration=46.378546ms 2026-03-10T14:52:48.568 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=query_data t=2026-03-10T14:52:48.469383972Z level=info msg="Query Service initialization" 2026-03-10T14:52:48.568 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=live.push_http t=2026-03-10T14:52:48.472035935Z level=info msg="Live Push Gateway initialization" 2026-03-10T14:52:48.568 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=ngalert.migration t=2026-03-10T14:52:48.473886338Z level=info msg=Starting 2026-03-10T14:52:48.568 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=ngalert.migration t=2026-03-10T14:52:48.474226163Z level=info msg="Applying transition" currentType=Legacy desiredType=UnifiedAlerting cleanOnDowngrade=false cleanOnUpgrade=false 2026-03-10T14:52:48.568 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=ngalert.migration orgID=1 
t=2026-03-10T14:52:48.474512139Z level=info msg="Migrating alerts for organisation" 2026-03-10T14:52:48.568 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=ngalert.migration orgID=1 t=2026-03-10T14:52:48.474893723Z level=info msg="Alerts found to migrate" alerts=0 2026-03-10T14:52:48.568 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=ngalert.migration t=2026-03-10T14:52:48.475906168Z level=info msg="Completed alerting migration" 2026-03-10T14:52:48.568 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=ngalert.state.manager t=2026-03-10T14:52:48.484873269Z level=info msg="Running in alternative execution of Error/NoData mode" 2026-03-10T14:52:48.568 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=infra.usagestats.collector t=2026-03-10T14:52:48.485952168Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 2026-03-10T14:52:48.568 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=provisioning.datasources t=2026-03-10T14:52:48.487200504Z level=info msg="inserting datasource from configuration" name=Dashboard1 uid=P43CA22E17D0F9596 2026-03-10T14:52:48.568 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=provisioning.datasources t=2026-03-10T14:52:48.493157563Z level=info msg="inserting datasource from configuration" name=Loki uid=P8E80F9AEF21F6940 2026-03-10T14:52:48.568 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=provisioning.alerting t=2026-03-10T14:52:48.499185273Z level=info msg="starting to provision alerting" 2026-03-10T14:52:48.568 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=provisioning.alerting t=2026-03-10T14:52:48.499202256Z level=info msg="finished to provision alerting" 2026-03-10T14:52:48.568 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 
14:52:48 vm03 bash[50670]: logger=http.server t=2026-03-10T14:52:48.500696201Z level=info msg="HTTP Server TLS settings" MinTLSVersion=TLS1.2 configuredciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA 2026-03-10T14:52:48.568 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=http.server t=2026-03-10T14:52:48.501056245Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=https subUrl= socket= 2026-03-10T14:52:48.568 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=ngalert.state.manager t=2026-03-10T14:52:48.501266107Z level=info msg="Warming state cache for startup" 2026-03-10T14:52:48.568 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=ngalert.state.manager t=2026-03-10T14:52:48.501654143Z level=info msg="State cache has been initialized" states=0 duration=387.695µs 2026-03-10T14:52:48.568 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=grafanaStorageLogger t=2026-03-10T14:52:48.502375744Z level=info msg="Storage starting" 2026-03-10T14:52:48.568 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=provisioning.dashboard t=2026-03-10T14:52:48.504946174Z level=info msg="starting to provision dashboards" 2026-03-10T14:52:48.568 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=ngalert.multiorg.alertmanager t=2026-03-10T14:52:48.51926995Z level=info msg="Starting MultiOrg Alertmanager" 2026-03-10T14:52:48.568 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=ngalert.scheduler 
t=2026-03-10T14:52:48.519287904Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1 2026-03-10T14:52:48.568 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=ticker t=2026-03-10T14:52:48.519315546Z level=info msg=starting first_tick=2026-03-10T14:52:50Z 2026-03-10T14:52:48.568 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=sqlstore.transactions t=2026-03-10T14:52:48.553577034Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked" 2026-03-10T14:52:48.873 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=sqlstore.transactions t=2026-03-10T14:52:48.566079803Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 code="database is locked" 2026-03-10T14:52:48.873 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=sqlstore.transactions t=2026-03-10T14:52:48.580998112Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=2 code="database is locked" 2026-03-10T14:52:48.873 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=plugins.update.checker t=2026-03-10T14:52:48.59289672Z level=info msg="Update check succeeded" duration=75.682446ms 2026-03-10T14:52:48.873 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=sqlstore.transactions t=2026-03-10T14:52:48.612621082Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=3 code="database is locked" 2026-03-10T14:52:48.873 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=sqlstore.transactions t=2026-03-10T14:52:48.629248179Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=4 code="database is locked" 2026-03-10T14:52:48.873 
INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=secret.migration t=2026-03-10T14:52:48.634899927Z level=error msg="Server lock for secret migration already exists" 2026-03-10T14:52:48.873 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=provisioning.dashboard t=2026-03-10T14:52:48.648101332Z level=info msg="finished to provision dashboards" 2026-03-10T14:52:48.873 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=grafana-apiserver t=2026-03-10T14:52:48.774349128Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" 2026-03-10T14:52:48.873 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:52:48 vm03 bash[50670]: logger=grafana-apiserver t=2026-03-10T14:52:48.775062032Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager" 2026-03-10T14:52:49.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:48 vm03 bash[23394]: cluster 2026-03-10T14:52:48.234276+0000 mgr.y (mgr.24425) 38 : cluster [DBG] pgmap v15: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:52:49.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:48 vm03 bash[23394]: cluster 2026-03-10T14:52:48.234276+0000 mgr.y (mgr.24425) 38 : cluster [DBG] pgmap v15: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:52:49.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:48 vm03 bash[23394]: audit 2026-03-10T14:52:48.513416+0000 mgr.y (mgr.24425) 39 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:52:49.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:48 vm03 bash[23394]: audit 2026-03-10T14:52:48.513416+0000 mgr.y (mgr.24425) 39 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": 
"service status", "format": "json"}]: dispatch 2026-03-10T14:52:49.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:48 vm00 bash[28403]: cluster 2026-03-10T14:52:48.234276+0000 mgr.y (mgr.24425) 38 : cluster [DBG] pgmap v15: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:52:49.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:48 vm00 bash[28403]: cluster 2026-03-10T14:52:48.234276+0000 mgr.y (mgr.24425) 38 : cluster [DBG] pgmap v15: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:52:49.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:48 vm00 bash[28403]: audit 2026-03-10T14:52:48.513416+0000 mgr.y (mgr.24425) 39 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:52:49.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:48 vm00 bash[28403]: audit 2026-03-10T14:52:48.513416+0000 mgr.y (mgr.24425) 39 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:52:49.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:48 vm00 bash[20726]: cluster 2026-03-10T14:52:48.234276+0000 mgr.y (mgr.24425) 38 : cluster [DBG] pgmap v15: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:52:49.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:48 vm00 bash[20726]: cluster 2026-03-10T14:52:48.234276+0000 mgr.y (mgr.24425) 38 : cluster [DBG] pgmap v15: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:52:49.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:48 vm00 bash[20726]: audit 2026-03-10T14:52:48.513416+0000 mgr.y (mgr.24425) 39 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service 
status", "format": "json"}]: dispatch 2026-03-10T14:52:49.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:48 vm00 bash[20726]: audit 2026-03-10T14:52:48.513416+0000 mgr.y (mgr.24425) 39 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:52:50.328 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/mon.c/config 2026-03-10T14:52:50.606 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T14:52:50.606 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":69,"fsid":"93bd26bc-1c8f-11f1-8404-610ce866bde7","created":"2026-03-10T14:45:32.261671+0000","modified":"2026-03-10T14:52:24.194023+0000","last_up_change":"2026-03-10T14:51:31.082469+0000","last_in_change":"2026-03-10T14:51:12.988931+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":18,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":6,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-10T14:48:38.744817+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"22","last_force_op_resend":"0","las
t_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair distribution","score_acting":7.8899998664855957,"score_stable":7.8899998664855957,"optimal_score":0.37999999523162842,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":2,"pool_name":".rgw.root","create_time":"2026-03-10T14:51:50.884049+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"59","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"sna
p_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair distribution","score_acting":1.5,"score_stable":1.5,"optimal_score":1,"raw_score_acting":1.5,"raw_score_stable":1.5,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":3,"pool_name":"default.rgw.log","create_time":"2026-03-10T14:51:52.809075+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"61","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_
max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair distribution","score_acting":1.5,"score_stable":1.5,"optimal_score":1,"raw_score_acting":1.5,"raw_score_stable":1.5,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":4,"pool_name":"datapool","create_time":"2026-03-10T14:51:54.520244+0000","flags":8193,"flags_names":"hashpspool,selfmanaged_snaps","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":3,"pg_placement_num":3,"pg_placement_num_target":3,"pg_num_target":3,"pg_num_pending":3,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"67","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":2,"snap_epoch":67,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age"
:0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rbd":{}},"read_balance":{"score_type":"Fair distribution","score_acting":2.6500000953674316,"score_stable":2.6500000953674316,"optimal_score":0.87999999523162842,"raw_score_acting":2.3299999237060547,"raw_score_stable":2.3299999237060547,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":5,"pool_name":"default.rgw.control","create_time":"2026-03-10T14:51:54.951436+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"63","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hits
et":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair distribution","score_acting":1.25,"score_stable":1.25,"optimal_score":1,"raw_score_acting":1.25,"raw_score_stable":1.25,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":6,"pool_name":"default.rgw.meta","create_time":"2026-03-10T14:51:57.045848+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"65","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0
,"fast_read":false,"options":{"pg_autoscale_bias":4},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair distribution","score_acting":1.75,"score_stable":1.75,"optimal_score":1,"raw_score_acting":1.75,"raw_score_stable":1.75,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"c1ba9a14-6c50-4bf4-bfa2-935d1c099357","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":63,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6801","nonce":1492812989}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6802","nonce":1492812989}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6804","nonce":1492812989}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6803","nonce":1492812989}]},"public_addr":"192.168.123.100:6801/1492812989","cluster_addr":"192.168.123.100:6802/1492812989","heartbeat_back_addr":"192.168.123.100:6804/1492812989","heartbeat_front_addr":"192.168.123.100:6803/1492812989","state":["exists","up"]},{"osd":1,"uuid":"d926117c-9bf7-44cb-8796-78132bdc13d6","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":63,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6805","nonce":198852601}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6806","nonce":198852601}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6808","nonce":198852601}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6807","nonce":198852601}]},"public_addr":"192.168.123.100:6805/198852601","cluster_addr":"192.168.123.100:6806/198852601","heartbeat_back_addr":"192.168.123.100:6808/198852601","heartbeat_front_addr":"192.168.123.100:6807/198852601","state":["exists","up"]},{"osd":2,"uuid":"2ef814fa-4e2d-4d38
-94de-a33c6dc06fe1","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":18,"up_thru":63,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6809","nonce":4087124508}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6810","nonce":4087124508}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6812","nonce":4087124508}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6811","nonce":4087124508}]},"public_addr":"192.168.123.100:6809/4087124508","cluster_addr":"192.168.123.100:6810/4087124508","heartbeat_back_addr":"192.168.123.100:6812/4087124508","heartbeat_front_addr":"192.168.123.100:6811/4087124508","state":["exists","up"]},{"osd":3,"uuid":"536f0633-b026-45b8-8c47-eb23cccf9b64","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":26,"up_thru":63,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6813","nonce":1912373457}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6814","nonce":1912373457}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6816","nonce":1912373457}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6815","nonce":1912373457}]},"public_addr":"192.168.123.100:6813/1912373457","cluster_addr":"192.168.123.100:6814/1912373457","heartbeat_back_addr":"192.168.123.100:6816/1912373457","heartbeat_front_addr":"192.168.123.100:6815/1912373457","state":["exists","up"]},{"osd":4,"uuid":"d4924339-f850-475e-9859-ad7c6a3d2123","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":32,"up_thru":63,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6800","nonce":4249951776}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6801","nonce":4249951776}]},"heartbeat_back_addrs":{"addrvec":[
{"type":"v2","addr":"192.168.123.103:6803","nonce":4249951776}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6802","nonce":4249951776}]},"public_addr":"192.168.123.103:6800/4249951776","cluster_addr":"192.168.123.103:6801/4249951776","heartbeat_back_addr":"192.168.123.103:6803/4249951776","heartbeat_front_addr":"192.168.123.103:6802/4249951776","state":["exists","up"]},{"osd":5,"uuid":"bb51bca8-ec91-4c05-94f6-3755aef22a35","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":39,"up_thru":63,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6804","nonce":413751251}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6805","nonce":413751251}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6807","nonce":413751251}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6806","nonce":413751251}]},"public_addr":"192.168.123.103:6804/413751251","cluster_addr":"192.168.123.103:6805/413751251","heartbeat_back_addr":"192.168.123.103:6807/413751251","heartbeat_front_addr":"192.168.123.103:6806/413751251","state":["exists","up"]},{"osd":6,"uuid":"d5d7abd1-1279-4f32-bce7-89f79446b2d1","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":47,"up_thru":61,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6808","nonce":2099210513}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6809","nonce":2099210513}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6811","nonce":2099210513}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6810","nonce":2099210513}]},"public_addr":"192.168.123.103:6808/2099210513","cluster_addr":"192.168.123.103:6809/2099210513","heartbeat_back_addr":"192.168.123.103:6811/2099210513","heartbeat_front_addr":"192.168.123.103:6810/2099210513","state":
["exists","up"]},{"osd":7,"uuid":"d982354a-c92b-452c-a8e1-997104ffd93b","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":54,"up_thru":63,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6812","nonce":1578983727}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6813","nonce":1578983727}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6815","nonce":1578983727}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6814","nonce":1578983727}]},"public_addr":"192.168.123.103:6812/1578983727","cluster_addr":"192.168.123.103:6813/1578983727","heartbeat_back_addr":"192.168.123.103:6815/1578983727","heartbeat_front_addr":"192.168.123.103:6814/1578983727","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T14:47:27.007815+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T14:47:57.703056+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T14:48:34.391610+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T14:49:09.732879+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T14:49:44.517228+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T14:50:19.132079+000
0","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T14:50:52.571517+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T14:51:28.759784+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.100:0/739298523":"2026-03-11T14:52:24.193999+0000","192.168.123.100:0/1588560963":"2026-03-11T14:45:45.658909+0000","192.168.123.100:0/2823338908":"2026-03-11T14:45:45.658909+0000","192.168.123.100:0/3348435210":"2026-03-11T14:45:56.679254+0000","192.168.123.100:6800/2111612896":"2026-03-11T14:45:45.658909+0000","192.168.123.100:0/3470354970":"2026-03-11T14:45:56.679254+0000","192.168.123.100:0/1278955968":"2026-03-11T14:45:45.658909+0000","192.168.123.100:0/1250213560":"2026-03-11T14:52:24.193999+0000","192.168.123.100:6800/1843604239":"2026-03-11T14:45:56.679254+0000","192.168.123.100:0/3861326831":"2026-03-11T14:45:56.679254+0000","192.168.123.100:0/1279262325":"2026-03-11T14:52:24.193999+0000","192.168.123.100:0/1580298582":"2026-03-11T14:52:24.193999+0000","192.168.123.100:0/2029816203":"2026-03-11T14:52:24.193999+0000","192.168.123.100:6800/3353739459":"2026-03-11T14:52:24.193999+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-10T14:52:50.615 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:50 vm00 bash[28403]: audit 2026-03-10T14:52:49.316326+0000 mon.a (mon.0) 
817 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:50.615 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:50 vm00 bash[28403]: audit 2026-03-10T14:52:49.316326+0000 mon.a (mon.0) 817 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:50.615 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:50 vm00 bash[20726]: audit 2026-03-10T14:52:49.316326+0000 mon.a (mon.0) 817 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:50.615 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:50 vm00 bash[20726]: audit 2026-03-10T14:52:49.316326+0000 mon.a (mon.0) 817 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:50.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:50 vm03 bash[23394]: audit 2026-03-10T14:52:49.316326+0000 mon.a (mon.0) 817 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:50.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:50 vm03 bash[23394]: audit 2026-03-10T14:52:49.316326+0000 mon.a (mon.0) 817 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:50.666 INFO:tasks.cephadm.ceph_manager.ceph:all up! 
2026-03-10T14:52:50.666 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 -- ceph osd dump --format=json 2026-03-10T14:52:51.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:51 vm03 bash[23394]: cluster 2026-03-10T14:52:50.234561+0000 mgr.y (mgr.24425) 40 : cluster [DBG] pgmap v16: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:52:51.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:51 vm03 bash[23394]: cluster 2026-03-10T14:52:50.234561+0000 mgr.y (mgr.24425) 40 : cluster [DBG] pgmap v16: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:52:51.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:51 vm03 bash[23394]: audit 2026-03-10T14:52:50.607047+0000 mon.a (mon.0) 818 : audit [DBG] from='client.? 192.168.123.100:0/238887303' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T14:52:51.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:51 vm03 bash[23394]: audit 2026-03-10T14:52:50.607047+0000 mon.a (mon.0) 818 : audit [DBG] from='client.? 
192.168.123.100:0/238887303' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T14:52:51.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:51 vm00 bash[28403]: cluster 2026-03-10T14:52:50.234561+0000 mgr.y (mgr.24425) 40 : cluster [DBG] pgmap v16: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:52:51.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:51 vm00 bash[28403]: cluster 2026-03-10T14:52:50.234561+0000 mgr.y (mgr.24425) 40 : cluster [DBG] pgmap v16: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:52:51.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:51 vm00 bash[28403]: audit 2026-03-10T14:52:50.607047+0000 mon.a (mon.0) 818 : audit [DBG] from='client.? 192.168.123.100:0/238887303' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T14:52:51.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:51 vm00 bash[28403]: audit 2026-03-10T14:52:50.607047+0000 mon.a (mon.0) 818 : audit [DBG] from='client.? 
192.168.123.100:0/238887303' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T14:52:51.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:51 vm00 bash[20726]: cluster 2026-03-10T14:52:50.234561+0000 mgr.y (mgr.24425) 40 : cluster [DBG] pgmap v16: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:52:51.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:51 vm00 bash[20726]: cluster 2026-03-10T14:52:50.234561+0000 mgr.y (mgr.24425) 40 : cluster [DBG] pgmap v16: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:52:51.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:51 vm00 bash[20726]: audit 2026-03-10T14:52:50.607047+0000 mon.a (mon.0) 818 : audit [DBG] from='client.? 192.168.123.100:0/238887303' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T14:52:51.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:51 vm00 bash[20726]: audit 2026-03-10T14:52:50.607047+0000 mon.a (mon.0) 818 : audit [DBG] from='client.? 192.168.123.100:0/238887303' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T14:52:52.863 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 14:52:52 vm00 systemd[1]: Stopping Ceph alertmanager.a for 93bd26bc-1c8f-11f1-8404-610ce866bde7... 
2026-03-10T14:52:52.863 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:52 vm00 bash[28403]: audit 2026-03-10T14:52:51.778121+0000 mon.a (mon.0) 819 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:52.863 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:52 vm00 bash[28403]: audit 2026-03-10T14:52:51.778121+0000 mon.a (mon.0) 819 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:52.863 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:52 vm00 bash[28403]: audit 2026-03-10T14:52:51.801026+0000 mon.a (mon.0) 820 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:52.863 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:52 vm00 bash[28403]: audit 2026-03-10T14:52:51.801026+0000 mon.a (mon.0) 820 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:52.863 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:52 vm00 bash[28403]: cluster 2026-03-10T14:52:52.235072+0000 mgr.y (mgr.24425) 41 : cluster [DBG] pgmap v17: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:52:52.863 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:52 vm00 bash[28403]: cluster 2026-03-10T14:52:52.235072+0000 mgr.y (mgr.24425) 41 : cluster [DBG] pgmap v17: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:52:52.863 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:52 vm00 bash[28403]: audit 2026-03-10T14:52:52.285003+0000 mon.a (mon.0) 821 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:52.863 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:52 vm00 bash[28403]: audit 2026-03-10T14:52:52.285003+0000 mon.a (mon.0) 821 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:52.863 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 
14:52:52 vm00 bash[28403]: audit 2026-03-10T14:52:52.291647+0000 mon.a (mon.0) 822 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:52.863 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:52 vm00 bash[28403]: audit 2026-03-10T14:52:52.291647+0000 mon.a (mon.0) 822 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:52.863 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:52 vm00 bash[28403]: audit 2026-03-10T14:52:52.293632+0000 mon.a (mon.0) 823 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:52:52.863 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:52 vm00 bash[28403]: audit 2026-03-10T14:52:52.293632+0000 mon.a (mon.0) 823 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:52:52.863 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:52 vm00 bash[28403]: audit 2026-03-10T14:52:52.294225+0000 mon.a (mon.0) 824 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:52:52.863 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:52 vm00 bash[28403]: audit 2026-03-10T14:52:52.294225+0000 mon.a (mon.0) 824 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:52:52.863 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:52 vm00 bash[28403]: audit 2026-03-10T14:52:52.299357+0000 mon.a (mon.0) 825 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:52.863 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:52 vm00 bash[28403]: audit 2026-03-10T14:52:52.299357+0000 mon.a (mon.0) 825 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 
2026-03-10T14:52:52.863 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:52 vm00 bash[28403]: cephadm 2026-03-10T14:52:52.311145+0000 mgr.y (mgr.24425) 42 : cephadm [INF] Reconfiguring alertmanager.a (dependencies changed)...
2026-03-10T14:52:52.864 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:52 vm00 bash[28403]: cephadm 2026-03-10T14:52:52.314014+0000 mgr.y (mgr.24425) 43 : cephadm [INF] Reconfiguring daemon alertmanager.a on vm00
2026-03-10T14:52:52.864 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:52 vm00 bash[20726]: audit 2026-03-10T14:52:51.778121+0000 mon.a (mon.0) 819 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:52.864 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:52 vm00 bash[20726]: audit 2026-03-10T14:52:51.801026+0000 mon.a (mon.0) 820 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:52.864 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:52 vm00 bash[20726]: cluster 2026-03-10T14:52:52.235072+0000 mgr.y (mgr.24425) 41 : cluster [DBG] pgmap v17: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:52:52.864 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:52 vm00 bash[20726]: audit 2026-03-10T14:52:52.285003+0000 mon.a (mon.0) 821 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:52.864 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:52 vm00 bash[20726]: audit 2026-03-10T14:52:52.291647+0000 mon.a (mon.0) 822 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:52.864 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:52 vm00 bash[20726]: audit 2026-03-10T14:52:52.293632+0000 mon.a (mon.0) 823 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:52:52.864 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:52 vm00 bash[20726]: audit 2026-03-10T14:52:52.294225+0000 mon.a (mon.0) 824 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T14:52:52.864 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:52 vm00 bash[20726]: audit 2026-03-10T14:52:52.299357+0000 mon.a (mon.0) 825 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:52.864 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:52 vm00 bash[20726]: cephadm 2026-03-10T14:52:52.311145+0000 mgr.y (mgr.24425) 42 : cephadm [INF] Reconfiguring alertmanager.a (dependencies changed)...
2026-03-10T14:52:52.864 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:52 vm00 bash[20726]: cephadm 2026-03-10T14:52:52.314014+0000 mgr.y (mgr.24425) 43 : cephadm [INF] Reconfiguring daemon alertmanager.a on vm00
2026-03-10T14:52:52.910 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 14:52:52 vm00 bash[55880]: ts=2026-03-10T14:52:52.864Z caller=main.go:583 level=info msg="Received SIGTERM, exiting gracefully..."
2026-03-10T14:52:53.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:52 vm03 bash[23394]: audit 2026-03-10T14:52:51.778121+0000 mon.a (mon.0) 819 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:53.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:52 vm03 bash[23394]: audit 2026-03-10T14:52:51.801026+0000 mon.a (mon.0) 820 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:53.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:52 vm03 bash[23394]: cluster 2026-03-10T14:52:52.235072+0000 mgr.y (mgr.24425) 41 : cluster [DBG] pgmap v17: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:52:53.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:52 vm03 bash[23394]: audit 2026-03-10T14:52:52.285003+0000 mon.a (mon.0) 821 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:53.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:52 vm03 bash[23394]: audit 2026-03-10T14:52:52.291647+0000 mon.a (mon.0) 822 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:53.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:52 vm03 bash[23394]: audit 2026-03-10T14:52:52.293632+0000 mon.a (mon.0) 823 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:52:53.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:52 vm03 bash[23394]: audit 2026-03-10T14:52:52.294225+0000 mon.a (mon.0) 824 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T14:52:53.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:52 vm03 bash[23394]: audit 2026-03-10T14:52:52.299357+0000 mon.a (mon.0) 825 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:53.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:52 vm03 bash[23394]: cephadm 2026-03-10T14:52:52.311145+0000 mgr.y (mgr.24425) 42 : cephadm [INF] Reconfiguring alertmanager.a (dependencies changed)...
2026-03-10T14:52:53.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:52 vm03 bash[23394]: cephadm 2026-03-10T14:52:52.314014+0000 mgr.y (mgr.24425) 43 : cephadm [INF] Reconfiguring daemon alertmanager.a on vm00
2026-03-10T14:52:53.215 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 14:52:52 vm00 bash[56631]: ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7-alertmanager-a
2026-03-10T14:52:53.216 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 14:52:52 vm00 systemd[1]: ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@alertmanager.a.service: Deactivated successfully.
2026-03-10T14:52:53.216 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 14:52:52 vm00 systemd[1]: Stopped Ceph alertmanager.a for 93bd26bc-1c8f-11f1-8404-610ce866bde7.
2026-03-10T14:52:53.216 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 14:52:52 vm00 systemd[1]: Started Ceph alertmanager.a for 93bd26bc-1c8f-11f1-8404-610ce866bde7.
2026-03-10T14:52:53.584 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 14:52:53 vm00 bash[56709]: ts=2026-03-10T14:52:53.313Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)"
2026-03-10T14:52:53.584 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 14:52:53 vm00 bash[56709]: ts=2026-03-10T14:52:53.313Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)"
2026-03-10T14:52:53.584 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 14:52:53 vm00 bash[56709]: ts=2026-03-10T14:52:53.314Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.123.100 port=9094
2026-03-10T14:52:53.584 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 14:52:53 vm00 bash[56709]: ts=2026-03-10T14:52:53.319Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s
2026-03-10T14:52:53.584 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 14:52:53 vm00 bash[56709]: ts=2026-03-10T14:52:53.336Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml
2026-03-10T14:52:53.584 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 14:52:53 vm00 bash[56709]: ts=2026-03-10T14:52:53.336Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml
2026-03-10T14:52:53.584 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 14:52:53 vm00 bash[56709]: ts=2026-03-10T14:52:53.338Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9093
2026-03-10T14:52:53.584 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 14:52:53 vm00 bash[56709]: ts=2026-03-10T14:52:53.338Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=[::]:9093
2026-03-10T14:52:53.966 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:52:53 vm00 bash[21005]: ::ffff:192.168.123.103 - - [10/Mar/2026:14:52:53] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T14:52:54.077 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 14:52:54 vm03 systemd[1]: Stopping Ceph prometheus.a for 93bd26bc-1c8f-11f1-8404-610ce866bde7...
2026-03-10T14:52:54.077 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 14:52:54 vm03 bash[49425]: ts=2026-03-10T14:52:54.062Z caller=main.go:964 level=warn msg="Received SIGTERM, exiting gracefully..."
2026-03-10T14:52:54.077 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 14:52:54 vm03 bash[49425]: ts=2026-03-10T14:52:54.062Z caller=main.go:988 level=info msg="Stopping scrape discovery manager..."
2026-03-10T14:52:54.077 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 14:52:54 vm03 bash[49425]: ts=2026-03-10T14:52:54.062Z caller=main.go:1002 level=info msg="Stopping notify discovery manager..."
2026-03-10T14:52:54.077 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 14:52:54 vm03 bash[49425]: ts=2026-03-10T14:52:54.062Z caller=manager.go:177 level=info component="rule manager" msg="Stopping rule manager..."
2026-03-10T14:52:54.077 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 14:52:54 vm03 bash[49425]: ts=2026-03-10T14:52:54.062Z caller=manager.go:187 level=info component="rule manager" msg="Rule manager stopped"
2026-03-10T14:52:54.077 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 14:52:54 vm03 bash[49425]: ts=2026-03-10T14:52:54.062Z caller=main.go:1039 level=info msg="Stopping scrape manager..."
2026-03-10T14:52:54.077 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 14:52:54 vm03 bash[49425]: ts=2026-03-10T14:52:54.063Z caller=main.go:984 level=info msg="Scrape discovery manager stopped"
2026-03-10T14:52:54.077 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 14:52:54 vm03 bash[49425]: ts=2026-03-10T14:52:54.063Z caller=main.go:998 level=info msg="Notify discovery manager stopped"
2026-03-10T14:52:54.078 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 14:52:54 vm03 bash[49425]: ts=2026-03-10T14:52:54.063Z caller=main.go:1031 level=info msg="Scrape manager stopped"
2026-03-10T14:52:54.078 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 14:52:54 vm03 bash[49425]: ts=2026-03-10T14:52:54.065Z caller=notifier.go:618 level=info component=notifier msg="Stopping notification manager..."
2026-03-10T14:52:54.078 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 14:52:54 vm03 bash[49425]: ts=2026-03-10T14:52:54.065Z caller=main.go:1261 level=info msg="Notifier manager stopped"
2026-03-10T14:52:54.078 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 14:52:54 vm03 bash[49425]: ts=2026-03-10T14:52:54.065Z caller=main.go:1273 level=info msg="See you next time!"
2026-03-10T14:52:54.333 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:54 vm03 bash[23394]: audit 2026-03-10T14:52:53.073503+0000 mon.a (mon.0) 826 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:54.333 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:54 vm03 bash[23394]: audit 2026-03-10T14:52:53.206657+0000 mon.a (mon.0) 827 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:54.333 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:54 vm03 bash[23394]: cephadm 2026-03-10T14:52:53.207912+0000 mgr.y (mgr.24425) 44 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)...
2026-03-10T14:52:54.333 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:54 vm03 bash[23394]: cephadm 2026-03-10T14:52:53.401018+0000 mgr.y (mgr.24425) 45 : cephadm [INF] Reconfiguring daemon prometheus.a on vm03
2026-03-10T14:52:54.333 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 14:52:54 vm03 bash[51234]: ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7-prometheus-a
2026-03-10T14:52:54.333 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 14:52:54 vm03 systemd[1]: ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@prometheus.a.service: Deactivated successfully.
2026-03-10T14:52:54.333 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 14:52:54 vm03 systemd[1]: Stopped Ceph prometheus.a for 93bd26bc-1c8f-11f1-8404-610ce866bde7.
2026-03-10T14:52:54.333 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 14:52:54 vm03 systemd[1]: Started Ceph prometheus.a for 93bd26bc-1c8f-11f1-8404-610ce866bde7.
2026-03-10T14:52:54.333 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 14:52:54 vm03 bash[51311]: ts=2026-03-10T14:52:54.291Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.0, branch=HEAD, revision=c05c15512acb675e3f6cd662a6727854e93fc024)"
2026-03-10T14:52:54.333 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 14:52:54 vm03 bash[51311]: ts=2026-03-10T14:52:54.291Z caller=main.go:622 level=info build_context="(go=go1.22.1, platform=linux/amd64, user=root@b5723e458358, date=20240319-10:54:45, tags=netgo,builtinassets,stringlabels)"
2026-03-10T14:52:54.333 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 14:52:54 vm03 bash[51311]: ts=2026-03-10T14:52:54.291Z caller=main.go:623 level=info host_details="(Linux 5.15.0-1092-kvm #97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026 x86_64 vm03 (none))"
2026-03-10T14:52:54.333 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 14:52:54 vm03 bash[51311]: ts=2026-03-10T14:52:54.291Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)"
2026-03-10T14:52:54.333 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 14:52:54 vm03 bash[51311]: ts=2026-03-10T14:52:54.291Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)"
2026-03-10T14:52:54.333 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 14:52:54 vm03 bash[51311]: ts=2026-03-10T14:52:54.292Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=:9095
2026-03-10T14:52:54.333 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 14:52:54 vm03 bash[51311]: ts=2026-03-10T14:52:54.293Z caller=main.go:1129 level=info msg="Starting TSDB ..."
2026-03-10T14:52:54.333 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 14:52:54 vm03 bash[51311]: ts=2026-03-10T14:52:54.298Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9095
2026-03-10T14:52:54.333 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 14:52:54 vm03 bash[51311]: ts=2026-03-10T14:52:54.298Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=[::]:9095
2026-03-10T14:52:54.333 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 14:52:54 vm03 bash[51311]: ts=2026-03-10T14:52:54.308Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any"
2026-03-10T14:52:54.333 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 14:52:54 vm03 bash[51311]: ts=2026-03-10T14:52:54.308Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=2.945µs
2026-03-10T14:52:54.334 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 14:52:54 vm03 bash[51311]: ts=2026-03-10T14:52:54.308Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while"
2026-03-10T14:52:54.334 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 14:52:54 vm03 bash[51311]: ts=2026-03-10T14:52:54.309Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=1
2026-03-10T14:52:54.334 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 14:52:54 vm03 bash[51311]: ts=2026-03-10T14:52:54.309Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=1 maxSegment=1
2026-03-10T14:52:54.334 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 14:52:54 vm03 bash[51311]: ts=2026-03-10T14:52:54.309Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=74.068µs wal_replay_duration=455.834µs wbl_replay_duration=201ns total_replay_duration=800.849µs
2026-03-10T14:52:54.334 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 14:52:54 vm03 bash[51311]: ts=2026-03-10T14:52:54.311Z caller=main.go:1150 level=info fs_type=EXT4_SUPER_MAGIC
2026-03-10T14:52:54.334 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 14:52:54 vm03 bash[51311]: ts=2026-03-10T14:52:54.311Z caller=main.go:1153 level=info msg="TSDB started"
2026-03-10T14:52:54.334 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 14:52:54 vm03 bash[51311]: ts=2026-03-10T14:52:54.311Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
2026-03-10T14:52:54.334 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 14:52:54 vm03 bash[51311]: ts=2026-03-10T14:52:54.332Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=20.969343ms db_storage=882ns remote_storage=771ns web_handler=392ns query_engine=301ns scrape=737.37µs scrape_sd=68.177µs notify=7.054µs notify_sd=4.729µs rules=19.779205ms tracing=5.4µs
2026-03-10T14:52:54.334 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 14:52:54 vm03 bash[51311]: ts=2026-03-10T14:52:54.332Z caller=main.go:1114 level=info msg="Server is ready to receive web requests."
2026-03-10T14:52:54.466 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:54 vm00 bash[28403]: audit 2026-03-10T14:52:53.073503+0000 mon.a (mon.0) 826 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:54.466 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:54 vm00 bash[28403]: audit 2026-03-10T14:52:53.206657+0000 mon.a (mon.0) 827 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:54.466 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:54 vm00 bash[28403]: cephadm 2026-03-10T14:52:53.207912+0000 mgr.y (mgr.24425) 44 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)...
2026-03-10T14:52:54.466 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:54 vm00 bash[28403]: cephadm 2026-03-10T14:52:53.401018+0000 mgr.y (mgr.24425) 45 : cephadm [INF] Reconfiguring daemon prometheus.a on vm03
2026-03-10T14:52:54.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:54 vm00 bash[20726]: audit 2026-03-10T14:52:53.073503+0000 mon.a (mon.0) 826 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:54.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:54 vm00 bash[20726]: audit 2026-03-10T14:52:53.206657+0000 mon.a (mon.0) 827 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:54.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:54 vm00 bash[20726]: cephadm 2026-03-10T14:52:53.207912+0000 mgr.y (mgr.24425) 44 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)...
2026-03-10T14:52:54.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:54 vm00 bash[20726]: cephadm 2026-03-10T14:52:53.401018+0000 mgr.y (mgr.24425) 45 : cephadm [INF] Reconfiguring daemon prometheus.a on vm03
2026-03-10T14:52:54.466 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:52:54 vm00 bash[21005]: [10/Mar/2026:14:52:54] ENGINE Bus STOPPING
2026-03-10T14:52:54.625 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 14:52:54 vm03 bash[51311]: ts=2026-03-10T14:52:54.332Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..."
2026-03-10T14:52:54.899 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:52:54 vm00 bash[21005]: [10/Mar/2026:14:52:54] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
2026-03-10T14:52:54.899 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:52:54 vm00 bash[21005]: [10/Mar/2026:14:52:54] ENGINE Bus STOPPED
2026-03-10T14:52:54.899 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:52:54 vm00 bash[21005]: [10/Mar/2026:14:52:54] ENGINE Bus STARTING
2026-03-10T14:52:54.899 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:52:54 vm00 bash[21005]: [10/Mar/2026:14:52:54] ENGINE Serving on http://:::9283
2026-03-10T14:52:54.899 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:52:54 vm00 bash[21005]: [10/Mar/2026:14:52:54] ENGINE Bus STARTED
2026-03-10T14:52:54.899 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:52:54 vm00 bash[21005]: [10/Mar/2026:14:52:54] ENGINE Bus STOPPING
2026-03-10T14:52:54.899 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:52:54 vm00 bash[21005]: [10/Mar/2026:14:52:54] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
2026-03-10T14:52:54.899 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:52:54 vm00 bash[21005]: [10/Mar/2026:14:52:54] ENGINE Bus STOPPED
2026-03-10T14:52:54.899 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:52:54 vm00 bash[21005]: [10/Mar/2026:14:52:54] ENGINE Bus STARTING
2026-03-10T14:52:54.899 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:52:54 vm00 bash[21005]: [10/Mar/2026:14:52:54] ENGINE Serving on http://:::9283
2026-03-10T14:52:54.899 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:52:54 vm00 bash[21005]: [10/Mar/2026:14:52:54] ENGINE Bus STARTED
2026-03-10T14:52:54.899 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:52:54 vm00 bash[21005]: [10/Mar/2026:14:52:54] ENGINE Bus STOPPING
2026-03-10T14:52:54.899 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:52:54 vm00 bash[21005]: [10/Mar/2026:14:52:54] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
2026-03-10T14:52:54.899 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:52:54 vm00 bash[21005]: [10/Mar/2026:14:52:54] ENGINE Bus STOPPED
2026-03-10T14:52:54.899 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:52:54 vm00 bash[21005]: [10/Mar/2026:14:52:54] ENGINE Bus STARTING
2026-03-10T14:52:55.183 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:52:54 vm00 bash[21005]: [10/Mar/2026:14:52:54] ENGINE Serving on http://:::9283
2026-03-10T14:52:55.183 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:52:54 vm00 bash[21005]: [10/Mar/2026:14:52:54] ENGINE Bus STARTED
2026-03-10T14:52:55.371 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/mon.c/config
2026-03-10T14:52:55.455 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 14:52:55 vm00 bash[56709]: ts=2026-03-10T14:52:55.323Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000851644s
2026-03-10T14:52:55.455 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:55 vm00 bash[28403]: audit 2026-03-10T14:52:54.183723+0000 mon.a (mon.0) 828 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:55.455 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:55 vm00 bash[28403]: audit 2026-03-10T14:52:54.187680+0000 mon.a (mon.0) 829 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:55.455 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:55 vm00 bash[28403]: audit 2026-03-10T14:52:54.191234+0000 mon.a (mon.0) 830 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
2026-03-10T14:52:55.455 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:55 vm00 bash[28403]: audit 2026-03-10T14:52:54.191477+0000 mgr.y (mgr.24425) 46 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
2026-03-10T14:52:55.455 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:55 vm00 bash[28403]: audit 2026-03-10T14:52:54.192027+0000 mon.a (mon.0) 831 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm00.local:9093"}]: dispatch
2026-03-10T14:52:55.455 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:55 vm00 bash[28403]: audit 2026-03-10T14:52:54.192167+0000 mgr.y (mgr.24425) 47 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm00.local:9093"}]: dispatch
2026-03-10T14:52:55.455 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:55 vm00 bash[28403]: audit 2026-03-10T14:52:54.194830+0000 mon.a (mon.0) 832 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:55.455 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:55 vm00 bash[28403]: audit 2026-03-10T14:52:54.202510+0000 mon.a (mon.0) 833 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
2026-03-10T14:52:55.455 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:55 vm00 bash[28403]: audit 2026-03-10T14:52:54.202694+0000 mgr.y (mgr.24425) 48 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
2026-03-10T14:52:55.455 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:55 vm00 bash[28403]: audit 2026-03-10T14:52:54.203145+0000 mon.a (mon.0) 834 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm03.local:3000"}]: dispatch
2026-03-10T14:52:55.455 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:55 vm00 bash[28403]: audit 2026-03-10T14:52:54.203300+0000 mgr.y (mgr.24425) 49 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm03.local:3000"}]: dispatch
2026-03-10T14:52:55.455 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:55 vm00 bash[28403]: audit 2026-03-10T14:52:54.208322+0000 mon.a (mon.0) 835 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:52:55.455 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:55 vm00 bash[28403]: audit 2026-03-10T14:52:54.216522+0000 mon.a (mon.0) 836 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch
2026-03-10T14:52:55.455 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:55 vm00 bash[28403]: audit 2026-03-10T14:52:54.216726+0000 mgr.y (mgr.24425) 50 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch
cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T14:52:55.455 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:55 vm00 bash[28403]: audit 2026-03-10T14:52:54.217553+0000 mon.a (mon.0) 837 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm03.local:9095"}]: dispatch 2026-03-10T14:52:55.455 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:55 vm00 bash[28403]: audit 2026-03-10T14:52:54.217553+0000 mon.a (mon.0) 837 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm03.local:9095"}]: dispatch 2026-03-10T14:52:55.455 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:55 vm00 bash[28403]: audit 2026-03-10T14:52:54.217725+0000 mgr.y (mgr.24425) 51 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm03.local:9095"}]: dispatch 2026-03-10T14:52:55.455 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:55 vm00 bash[28403]: audit 2026-03-10T14:52:54.217725+0000 mgr.y (mgr.24425) 51 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm03.local:9095"}]: dispatch 2026-03-10T14:52:55.455 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:55 vm00 bash[28403]: audit 2026-03-10T14:52:54.221850+0000 mon.a (mon.0) 838 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:55.456 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:55 vm00 bash[28403]: audit 2026-03-10T14:52:54.221850+0000 mon.a (mon.0) 838 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:55.456 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:55 vm00 bash[28403]: cluster 2026-03-10T14:52:54.235297+0000 mgr.y (mgr.24425) 52 : cluster [DBG] pgmap v18: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:52:55.456 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:55 vm00 bash[28403]: cluster 2026-03-10T14:52:54.235297+0000 mgr.y (mgr.24425) 52 : cluster [DBG] pgmap v18: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:52:55.456 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:55 vm00 bash[28403]: audit 2026-03-10T14:52:54.257433+0000 mon.a (mon.0) 839 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:52:55.456 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:55 vm00 bash[28403]: audit 2026-03-10T14:52:54.257433+0000 mon.a (mon.0) 839 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:52:55.456 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:55 vm00 bash[28403]: audit 2026-03-10T14:52:54.322940+0000 mon.a (mon.0) 840 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 
2026-03-10T14:52:55.456 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:55 vm00 bash[28403]: audit 2026-03-10T14:52:54.322940+0000 mon.a (mon.0) 840 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:52:55.456 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:55 vm00 bash[20726]: audit 2026-03-10T14:52:54.183723+0000 mon.a (mon.0) 828 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:55.456 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:55 vm00 bash[20726]: audit 2026-03-10T14:52:54.183723+0000 mon.a (mon.0) 828 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:55.456 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:55 vm00 bash[20726]: audit 2026-03-10T14:52:54.187680+0000 mon.a (mon.0) 829 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:55.456 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:55 vm00 bash[20726]: audit 2026-03-10T14:52:54.187680+0000 mon.a (mon.0) 829 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:55.456 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:55 vm00 bash[20726]: audit 2026-03-10T14:52:54.191234+0000 mon.a (mon.0) 830 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-10T14:52:55.456 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:55 vm00 bash[20726]: audit 2026-03-10T14:52:54.191234+0000 mon.a (mon.0) 830 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-10T14:52:55.456 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:55 vm00 bash[20726]: audit 2026-03-10T14:52:54.191477+0000 mgr.y (mgr.24425) 46 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-10T14:52:55.456 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:55 vm00 bash[20726]: audit 2026-03-10T14:52:54.191477+0000 mgr.y (mgr.24425) 46 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-10T14:52:55.456 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:55 vm00 bash[20726]: audit 2026-03-10T14:52:54.192027+0000 mon.a (mon.0) 831 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm00.local:9093"}]: dispatch 2026-03-10T14:52:55.456 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:55 vm00 bash[20726]: audit 2026-03-10T14:52:54.192027+0000 mon.a (mon.0) 831 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm00.local:9093"}]: dispatch 2026-03-10T14:52:55.456 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:55 vm00 bash[20726]: audit 2026-03-10T14:52:54.192167+0000 mgr.y (mgr.24425) 47 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm00.local:9093"}]: dispatch 2026-03-10T14:52:55.456 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:55 vm00 bash[20726]: audit 2026-03-10T14:52:54.192167+0000 mgr.y (mgr.24425) 47 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm00.local:9093"}]: dispatch 2026-03-10T14:52:55.456 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:55 vm00 bash[20726]: audit 2026-03-10T14:52:54.194830+0000 mon.a (mon.0) 832 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:55.456 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:55 vm00 bash[20726]: audit 2026-03-10T14:52:54.194830+0000 mon.a (mon.0) 832 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:55.456 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:55 vm00 bash[20726]: audit 2026-03-10T14:52:54.202510+0000 mon.a (mon.0) 833 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T14:52:55.456 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:55 vm00 bash[20726]: audit 2026-03-10T14:52:54.202510+0000 mon.a (mon.0) 833 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T14:52:55.456 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:55 vm00 bash[20726]: audit 2026-03-10T14:52:54.202694+0000 mgr.y (mgr.24425) 48 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T14:52:55.456 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:55 vm00 bash[20726]: audit 2026-03-10T14:52:54.202694+0000 mgr.y (mgr.24425) 48 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T14:52:55.456 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:55 vm00 bash[20726]: audit 2026-03-10T14:52:54.203145+0000 mon.a (mon.0) 834 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm03.local:3000"}]: dispatch 2026-03-10T14:52:55.456 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:55 vm00 bash[20726]: audit 2026-03-10T14:52:54.203145+0000 mon.a (mon.0) 834 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm03.local:3000"}]: dispatch 2026-03-10T14:52:55.456 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:55 vm00 bash[20726]: audit 2026-03-10T14:52:54.203300+0000 mgr.y (mgr.24425) 49 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm03.local:3000"}]: dispatch 2026-03-10T14:52:55.456 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:55 vm00 bash[20726]: audit 2026-03-10T14:52:54.203300+0000 mgr.y (mgr.24425) 49 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm03.local:3000"}]: dispatch 2026-03-10T14:52:55.456 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:55 vm00 bash[20726]: audit 2026-03-10T14:52:54.208322+0000 mon.a (mon.0) 835 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:55.456 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:55 vm00 bash[20726]: audit 2026-03-10T14:52:54.208322+0000 mon.a (mon.0) 835 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:55.456 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:55 vm00 bash[20726]: audit 2026-03-10T14:52:54.216522+0000 mon.a (mon.0) 836 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T14:52:55.456 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:55 vm00 bash[20726]: audit 2026-03-10T14:52:54.216522+0000 mon.a (mon.0) 836 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T14:52:55.456 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:55 vm00 bash[20726]: audit 2026-03-10T14:52:54.216726+0000 mgr.y (mgr.24425) 50 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T14:52:55.456 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:55 vm00 bash[20726]: audit 2026-03-10T14:52:54.216726+0000 mgr.y (mgr.24425) 50 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T14:52:55.456 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:55 vm00 bash[20726]: audit 2026-03-10T14:52:54.217553+0000 mon.a (mon.0) 837 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm03.local:9095"}]: dispatch 2026-03-10T14:52:55.456 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:55 vm00 bash[20726]: audit 2026-03-10T14:52:54.217553+0000 mon.a (mon.0) 837 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm03.local:9095"}]: dispatch 2026-03-10T14:52:55.456 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:55 vm00 bash[20726]: audit 2026-03-10T14:52:54.217725+0000 mgr.y (mgr.24425) 51 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm03.local:9095"}]: dispatch 2026-03-10T14:52:55.456 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:55 vm00 bash[20726]: audit 2026-03-10T14:52:54.217725+0000 mgr.y (mgr.24425) 51 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm03.local:9095"}]: dispatch 2026-03-10T14:52:55.456 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:55 vm00 bash[20726]: audit 2026-03-10T14:52:54.221850+0000 mon.a (mon.0) 838 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:55.456 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:55 vm00 bash[20726]: audit 2026-03-10T14:52:54.221850+0000 mon.a (mon.0) 838 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:55.456 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:55 vm00 bash[20726]: cluster 2026-03-10T14:52:54.235297+0000 mgr.y (mgr.24425) 52 : cluster [DBG] pgmap v18: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:52:55.456 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:55 vm00 bash[20726]: cluster 2026-03-10T14:52:54.235297+0000 mgr.y (mgr.24425) 52 : cluster [DBG] pgmap v18: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:52:55.456 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:55 vm00 bash[20726]: audit 2026-03-10T14:52:54.257433+0000 mon.a (mon.0) 839 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:52:55.456 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:55 vm00 bash[20726]: audit 2026-03-10T14:52:54.257433+0000 mon.a (mon.0) 839 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:52:55.457 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:55 vm00 bash[20726]: audit 2026-03-10T14:52:54.322940+0000 mon.a (mon.0) 840 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 
2026-03-10T14:52:55.457 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:55 vm00 bash[20726]: audit 2026-03-10T14:52:54.322940+0000 mon.a (mon.0) 840 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:52:55.624 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T14:52:55.624 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":69,"fsid":"93bd26bc-1c8f-11f1-8404-610ce866bde7","created":"2026-03-10T14:45:32.261671+0000","modified":"2026-03-10T14:52:24.194023+0000","last_up_change":"2026-03-10T14:51:31.082469+0000","last_in_change":"2026-03-10T14:51:12.988931+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":18,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":6,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-10T14:48:38.744817+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"22","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quo
ta_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair distribution","score_acting":7.8899998664855957,"score_stable":7.8899998664855957,"optimal_score":0.37999999523162842,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":2,"pool_name":".rgw.root","create_time":"2026-03-10T14:51:50.884049+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"59","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_b
ytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair distribution","score_acting":1.5,"score_stable":1.5,"optimal_score":1,"raw_score_acting":1.5,"raw_score_stable":1.5,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":3,"pool_name":"default.rgw.log","create_time":"2026-03-10T14:51:52.809075+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"61","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evic
t_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair distribution","score_acting":1.5,"score_stable":1.5,"optimal_score":1,"raw_score_acting":1.5,"raw_score_stable":1.5,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":4,"pool_name":"datapool","create_time":"2026-03-10T14:51:54.520244+0000","flags":8193,"flags_names":"hashpspool,selfmanaged_snaps","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":3,"pg_placement_num":3,"pg_placement_num_target":3,"pg_num_target":3,"pg_num_pending":3,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"67","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":2,"snap_epoch":67,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_p
romote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rbd":{}},"read_balance":{"score_type":"Fair distribution","score_acting":2.6500000953674316,"score_stable":2.6500000953674316,"optimal_score":0.87999999523162842,"raw_score_acting":2.3299999237060547,"raw_score_stable":2.3299999237060547,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":5,"pool_name":"default.rgw.control","create_time":"2026-03-10T14:51:54.951436+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"63","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num
_objects":0,"fast_read":false,"options":{},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair distribution","score_acting":1.25,"score_stable":1.25,"optimal_score":1,"raw_score_acting":1.25,"raw_score_stable":1.25,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":6,"pool_name":"default.rgw.meta","create_time":"2026-03-10T14:51:57.045848+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"65","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_autoscale_bias":4},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":1.75,"score_stable":1.75,"optimal_score":1,"raw_score_acting":1.75,"raw_score_stable":1.75,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"c1ba9a14-6c50-4bf4-bfa2-935d1c099357","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":63,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6801","nonce":1492812989}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6802","nonce":1492812989}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6804","nonce":1492812989}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6803","nonce":1492812989}]},"public_addr":"192.168.123.100:6801/1492812989","cluster_addr":"192.168.123.100:6802/1492812989","heartbeat_back_addr":"192.168.123.100:6804/1492812989","heartbeat_front_addr":"192.168.123.100:6803/1492812989","state":["exists","up"]},{"osd":1,"uuid":"d926117c-9bf7-44cb-8796-78132bdc13d6","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":63,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6805","nonce":198852601}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6806","nonce":198852601}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6808","nonce":198852601}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6807","nonce":198852601}]},"public_addr":"192.168.123.100:6805/198852601","cluster_addr":"192.168.123.100:6806/198852601","heartbeat_back_addr":"192.168.123.100:6808/198852601","heartbeat_front_addr":"192.168.123.100:6807/198852601","state":["exists","up"]},{"osd":2,"uuid":"2ef814fa-4e2d-4d38-94de-a33c6dc06fe1","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":18,"up
_thru":63,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6809","nonce":4087124508}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6810","nonce":4087124508}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6812","nonce":4087124508}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6811","nonce":4087124508}]},"public_addr":"192.168.123.100:6809/4087124508","cluster_addr":"192.168.123.100:6810/4087124508","heartbeat_back_addr":"192.168.123.100:6812/4087124508","heartbeat_front_addr":"192.168.123.100:6811/4087124508","state":["exists","up"]},{"osd":3,"uuid":"536f0633-b026-45b8-8c47-eb23cccf9b64","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":26,"up_thru":63,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6813","nonce":1912373457}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6814","nonce":1912373457}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6816","nonce":1912373457}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6815","nonce":1912373457}]},"public_addr":"192.168.123.100:6813/1912373457","cluster_addr":"192.168.123.100:6814/1912373457","heartbeat_back_addr":"192.168.123.100:6816/1912373457","heartbeat_front_addr":"192.168.123.100:6815/1912373457","state":["exists","up"]},{"osd":4,"uuid":"d4924339-f850-475e-9859-ad7c6a3d2123","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":32,"up_thru":63,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6800","nonce":4249951776}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6801","nonce":4249951776}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6803","nonce":4249951776}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"
192.168.123.103:6802","nonce":4249951776}]},"public_addr":"192.168.123.103:6800/4249951776","cluster_addr":"192.168.123.103:6801/4249951776","heartbeat_back_addr":"192.168.123.103:6803/4249951776","heartbeat_front_addr":"192.168.123.103:6802/4249951776","state":["exists","up"]},{"osd":5,"uuid":"bb51bca8-ec91-4c05-94f6-3755aef22a35","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":39,"up_thru":63,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6804","nonce":413751251}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6805","nonce":413751251}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6807","nonce":413751251}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6806","nonce":413751251}]},"public_addr":"192.168.123.103:6804/413751251","cluster_addr":"192.168.123.103:6805/413751251","heartbeat_back_addr":"192.168.123.103:6807/413751251","heartbeat_front_addr":"192.168.123.103:6806/413751251","state":["exists","up"]},{"osd":6,"uuid":"d5d7abd1-1279-4f32-bce7-89f79446b2d1","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":47,"up_thru":61,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6808","nonce":2099210513}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6809","nonce":2099210513}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6811","nonce":2099210513}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6810","nonce":2099210513}]},"public_addr":"192.168.123.103:6808/2099210513","cluster_addr":"192.168.123.103:6809/2099210513","heartbeat_back_addr":"192.168.123.103:6811/2099210513","heartbeat_front_addr":"192.168.123.103:6810/2099210513","state":["exists","up"]},{"osd":7,"uuid":"d982354a-c92b-452c-a8e1-997104ffd93b","up":1,"in":1,"weight":1,"primary_affinity":1,"las
t_clean_begin":0,"last_clean_end":0,"up_from":54,"up_thru":63,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6812","nonce":1578983727}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6813","nonce":1578983727}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6815","nonce":1578983727}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6814","nonce":1578983727}]},"public_addr":"192.168.123.103:6812/1578983727","cluster_addr":"192.168.123.103:6813/1578983727","heartbeat_back_addr":"192.168.123.103:6815/1578983727","heartbeat_front_addr":"192.168.123.103:6814/1578983727","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T14:47:27.007815+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T14:47:57.703056+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T14:48:34.391610+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T14:49:09.732879+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T14:49:44.517228+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T14:50:19.132079+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":454070154773803827
1,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T14:50:52.571517+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T14:51:28.759784+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.100:0/739298523":"2026-03-11T14:52:24.193999+0000","192.168.123.100:0/1588560963":"2026-03-11T14:45:45.658909+0000","192.168.123.100:0/2823338908":"2026-03-11T14:45:45.658909+0000","192.168.123.100:0/3348435210":"2026-03-11T14:45:56.679254+0000","192.168.123.100:6800/2111612896":"2026-03-11T14:45:45.658909+0000","192.168.123.100:0/3470354970":"2026-03-11T14:45:56.679254+0000","192.168.123.100:0/1278955968":"2026-03-11T14:45:45.658909+0000","192.168.123.100:0/1250213560":"2026-03-11T14:52:24.193999+0000","192.168.123.100:6800/1843604239":"2026-03-11T14:45:56.679254+0000","192.168.123.100:0/3861326831":"2026-03-11T14:45:56.679254+0000","192.168.123.100:0/1279262325":"2026-03-11T14:52:24.193999+0000","192.168.123.100:0/1580298582":"2026-03-11T14:52:24.193999+0000","192.168.123.100:0/2029816203":"2026-03-11T14:52:24.193999+0000","192.168.123.100:6800/3353739459":"2026-03-11T14:52:24.193999+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-10T14:52:55.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:55 vm03 bash[23394]: audit 2026-03-10T14:52:54.183723+0000 mon.a (mon.0) 828 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:55.625 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:55 vm03 bash[23394]: audit 2026-03-10T14:52:54.183723+0000 mon.a (mon.0) 828 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:55.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:55 vm03 bash[23394]: audit 2026-03-10T14:52:54.187680+0000 mon.a (mon.0) 829 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:55.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:55 vm03 bash[23394]: audit 2026-03-10T14:52:54.187680+0000 mon.a (mon.0) 829 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:55.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:55 vm03 bash[23394]: audit 2026-03-10T14:52:54.191234+0000 mon.a (mon.0) 830 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-10T14:52:55.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:55 vm03 bash[23394]: audit 2026-03-10T14:52:54.191234+0000 mon.a (mon.0) 830 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-10T14:52:55.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:55 vm03 bash[23394]: audit 2026-03-10T14:52:54.191477+0000 mgr.y (mgr.24425) 46 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-10T14:52:55.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:55 vm03 bash[23394]: audit 2026-03-10T14:52:54.191477+0000 mgr.y (mgr.24425) 46 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-10T14:52:55.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:55 vm03 bash[23394]: audit 2026-03-10T14:52:54.192027+0000 mon.a (mon.0) 831 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm00.local:9093"}]: dispatch 2026-03-10T14:52:55.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:55 vm03 bash[23394]: audit 2026-03-10T14:52:54.192027+0000 mon.a (mon.0) 831 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm00.local:9093"}]: dispatch 2026-03-10T14:52:55.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:55 vm03 bash[23394]: audit 2026-03-10T14:52:54.192167+0000 mgr.y (mgr.24425) 47 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm00.local:9093"}]: dispatch 2026-03-10T14:52:55.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:55 vm03 bash[23394]: audit 2026-03-10T14:52:54.192167+0000 mgr.y (mgr.24425) 47 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm00.local:9093"}]: dispatch 2026-03-10T14:52:55.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:55 vm03 bash[23394]: audit 2026-03-10T14:52:54.194830+0000 mon.a (mon.0) 832 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:55.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:55 vm03 bash[23394]: audit 2026-03-10T14:52:54.194830+0000 mon.a (mon.0) 832 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:55.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:55 vm03 bash[23394]: audit 2026-03-10T14:52:54.202510+0000 mon.a (mon.0) 833 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T14:52:55.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:55 vm03 bash[23394]: audit 2026-03-10T14:52:54.202510+0000 mon.a (mon.0) 833 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T14:52:55.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:55 vm03 bash[23394]: audit 2026-03-10T14:52:54.202694+0000 mgr.y (mgr.24425) 48 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T14:52:55.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:55 vm03 bash[23394]: audit 2026-03-10T14:52:54.202694+0000 mgr.y (mgr.24425) 48 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T14:52:55.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:55 vm03 bash[23394]: audit 2026-03-10T14:52:54.203145+0000 mon.a (mon.0) 834 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm03.local:3000"}]: dispatch 2026-03-10T14:52:55.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:55 vm03 bash[23394]: audit 2026-03-10T14:52:54.203145+0000 mon.a (mon.0) 834 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm03.local:3000"}]: dispatch 2026-03-10T14:52:55.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:55 vm03 bash[23394]: audit 2026-03-10T14:52:54.203300+0000 mgr.y (mgr.24425) 49 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm03.local:3000"}]: dispatch 2026-03-10T14:52:55.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:55 vm03 bash[23394]: audit 2026-03-10T14:52:54.203300+0000 mgr.y (mgr.24425) 49 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm03.local:3000"}]: dispatch 2026-03-10T14:52:55.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:55 vm03 bash[23394]: audit 2026-03-10T14:52:54.208322+0000 mon.a (mon.0) 835 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:55.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:55 vm03 bash[23394]: audit 2026-03-10T14:52:54.208322+0000 mon.a (mon.0) 835 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:55.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:55 vm03 bash[23394]: audit 2026-03-10T14:52:54.216522+0000 mon.a (mon.0) 836 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T14:52:55.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:55 vm03 bash[23394]: audit 2026-03-10T14:52:54.216522+0000 mon.a (mon.0) 836 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T14:52:55.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:55 vm03 bash[23394]: audit 2026-03-10T14:52:54.216726+0000 mgr.y (mgr.24425) 50 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T14:52:55.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:55 vm03 bash[23394]: audit 2026-03-10T14:52:54.216726+0000 mgr.y (mgr.24425) 50 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T14:52:55.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:55 vm03 bash[23394]: audit 2026-03-10T14:52:54.217553+0000 mon.a (mon.0) 837 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm03.local:9095"}]: dispatch 2026-03-10T14:52:55.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:55 vm03 bash[23394]: audit 2026-03-10T14:52:54.217553+0000 mon.a (mon.0) 837 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm03.local:9095"}]: dispatch 2026-03-10T14:52:55.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:55 vm03 bash[23394]: audit 2026-03-10T14:52:54.217725+0000 mgr.y (mgr.24425) 51 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm03.local:9095"}]: dispatch 2026-03-10T14:52:55.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:55 vm03 bash[23394]: audit 2026-03-10T14:52:54.217725+0000 mgr.y (mgr.24425) 51 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm03.local:9095"}]: dispatch 2026-03-10T14:52:55.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:55 vm03 bash[23394]: audit 2026-03-10T14:52:54.221850+0000 mon.a (mon.0) 838 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:55.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:55 vm03 bash[23394]: audit 2026-03-10T14:52:54.221850+0000 mon.a (mon.0) 838 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:55.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:55 vm03 bash[23394]: cluster 2026-03-10T14:52:54.235297+0000 mgr.y (mgr.24425) 52 : cluster [DBG] pgmap v18: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:52:55.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:55 vm03 bash[23394]: cluster 2026-03-10T14:52:54.235297+0000 mgr.y (mgr.24425) 52 : cluster [DBG] pgmap v18: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:52:55.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:55 vm03 bash[23394]: audit 2026-03-10T14:52:54.257433+0000 mon.a (mon.0) 839 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:52:55.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:55 vm03 bash[23394]: audit 2026-03-10T14:52:54.257433+0000 mon.a (mon.0) 839 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:52:55.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:55 vm03 bash[23394]: audit 2026-03-10T14:52:54.322940+0000 mon.a (mon.0) 840 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 
2026-03-10T14:52:55.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:55 vm03 bash[23394]: audit 2026-03-10T14:52:54.322940+0000 mon.a (mon.0) 840 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:52:55.683 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 -- ceph tell osd.0 flush_pg_stats 2026-03-10T14:52:55.683 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 -- ceph tell osd.1 flush_pg_stats 2026-03-10T14:52:55.684 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 -- ceph tell osd.2 flush_pg_stats 2026-03-10T14:52:55.684 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 -- ceph tell osd.3 flush_pg_stats 2026-03-10T14:52:55.684 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 -- ceph tell osd.4 flush_pg_stats 2026-03-10T14:52:55.684 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 -- ceph tell osd.5 flush_pg_stats 2026-03-10T14:52:55.684 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 
shell --fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 -- ceph tell osd.6 flush_pg_stats 2026-03-10T14:52:55.684 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 -- ceph tell osd.7 flush_pg_stats 2026-03-10T14:52:56.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:56 vm03 bash[23394]: audit 2026-03-10T14:52:55.625796+0000 mon.a (mon.0) 841 : audit [DBG] from='client.? 192.168.123.100:0/3123715040' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T14:52:56.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:56 vm03 bash[23394]: audit 2026-03-10T14:52:55.625796+0000 mon.a (mon.0) 841 : audit [DBG] from='client.? 192.168.123.100:0/3123715040' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T14:52:56.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:56 vm00 bash[28403]: audit 2026-03-10T14:52:55.625796+0000 mon.a (mon.0) 841 : audit [DBG] from='client.? 192.168.123.100:0/3123715040' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T14:52:56.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:56 vm00 bash[28403]: audit 2026-03-10T14:52:55.625796+0000 mon.a (mon.0) 841 : audit [DBG] from='client.? 192.168.123.100:0/3123715040' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T14:52:56.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:56 vm00 bash[20726]: audit 2026-03-10T14:52:55.625796+0000 mon.a (mon.0) 841 : audit [DBG] from='client.? 192.168.123.100:0/3123715040' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T14:52:56.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:56 vm00 bash[20726]: audit 2026-03-10T14:52:55.625796+0000 mon.a (mon.0) 841 : audit [DBG] from='client.? 
192.168.123.100:0/3123715040' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T14:52:57.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:57 vm03 bash[23394]: cluster 2026-03-10T14:52:56.235724+0000 mgr.y (mgr.24425) 53 : cluster [DBG] pgmap v19: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:52:57.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:57 vm03 bash[23394]: cluster 2026-03-10T14:52:56.235724+0000 mgr.y (mgr.24425) 53 : cluster [DBG] pgmap v19: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:52:57.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:57 vm00 bash[28403]: cluster 2026-03-10T14:52:56.235724+0000 mgr.y (mgr.24425) 53 : cluster [DBG] pgmap v19: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:52:57.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:57 vm00 bash[28403]: cluster 2026-03-10T14:52:56.235724+0000 mgr.y (mgr.24425) 53 : cluster [DBG] pgmap v19: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:52:57.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:57 vm00 bash[20726]: cluster 2026-03-10T14:52:56.235724+0000 mgr.y (mgr.24425) 53 : cluster [DBG] pgmap v19: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:52:57.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:57 vm00 bash[20726]: cluster 2026-03-10T14:52:56.235724+0000 mgr.y (mgr.24425) 53 : cluster [DBG] pgmap v19: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:52:58.875 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 14:52:58 vm03 bash[48459]: debug there is no tcmu-runner data available 2026-03-10T14:52:59.625 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:59 vm03 bash[23394]: cluster 2026-03-10T14:52:58.236071+0000 mgr.y (mgr.24425) 54 : cluster [DBG] pgmap v20: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:52:59.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:59 vm03 bash[23394]: cluster 2026-03-10T14:52:58.236071+0000 mgr.y (mgr.24425) 54 : cluster [DBG] pgmap v20: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:52:59.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:59 vm03 bash[23394]: audit 2026-03-10T14:52:58.515358+0000 mgr.y (mgr.24425) 55 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:52:59.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:59 vm03 bash[23394]: audit 2026-03-10T14:52:58.515358+0000 mgr.y (mgr.24425) 55 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:52:59.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:59 vm03 bash[23394]: audit 2026-03-10T14:52:58.806886+0000 mon.a (mon.0) 842 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:59.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:59 vm03 bash[23394]: audit 2026-03-10T14:52:58.806886+0000 mon.a (mon.0) 842 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:59.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:59 vm03 bash[23394]: audit 2026-03-10T14:52:58.813444+0000 mon.a (mon.0) 843 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:59.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:52:59 vm03 bash[23394]: audit 2026-03-10T14:52:58.813444+0000 mon.a (mon.0) 843 : audit [INF] from='mgr.24425 
192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:59.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:59 vm00 bash[28403]: cluster 2026-03-10T14:52:58.236071+0000 mgr.y (mgr.24425) 54 : cluster [DBG] pgmap v20: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:52:59.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:59 vm00 bash[28403]: cluster 2026-03-10T14:52:58.236071+0000 mgr.y (mgr.24425) 54 : cluster [DBG] pgmap v20: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:52:59.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:59 vm00 bash[28403]: audit 2026-03-10T14:52:58.515358+0000 mgr.y (mgr.24425) 55 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:52:59.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:59 vm00 bash[28403]: audit 2026-03-10T14:52:58.515358+0000 mgr.y (mgr.24425) 55 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:52:59.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:59 vm00 bash[28403]: audit 2026-03-10T14:52:58.806886+0000 mon.a (mon.0) 842 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:59.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:59 vm00 bash[28403]: audit 2026-03-10T14:52:58.806886+0000 mon.a (mon.0) 842 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:59.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:59 vm00 bash[28403]: audit 2026-03-10T14:52:58.813444+0000 mon.a (mon.0) 843 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:59.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:52:59 vm00 bash[28403]: audit 2026-03-10T14:52:58.813444+0000 
mon.a (mon.0) 843 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:59.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:59 vm00 bash[20726]: cluster 2026-03-10T14:52:58.236071+0000 mgr.y (mgr.24425) 54 : cluster [DBG] pgmap v20: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:52:59.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:59 vm00 bash[20726]: cluster 2026-03-10T14:52:58.236071+0000 mgr.y (mgr.24425) 54 : cluster [DBG] pgmap v20: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:52:59.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:59 vm00 bash[20726]: audit 2026-03-10T14:52:58.515358+0000 mgr.y (mgr.24425) 55 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:52:59.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:59 vm00 bash[20726]: audit 2026-03-10T14:52:58.515358+0000 mgr.y (mgr.24425) 55 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:52:59.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:59 vm00 bash[20726]: audit 2026-03-10T14:52:58.806886+0000 mon.a (mon.0) 842 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:59.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:59 vm00 bash[20726]: audit 2026-03-10T14:52:58.806886+0000 mon.a (mon.0) 842 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:59.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:59 vm00 bash[20726]: audit 2026-03-10T14:52:58.813444+0000 mon.a (mon.0) 843 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:52:59.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:52:59 vm00 
bash[20726]: audit 2026-03-10T14:52:58.813444+0000 mon.a (mon.0) 843 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:53:00.425 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/mon.c/config 2026-03-10T14:53:00.427 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/mon.c/config 2026-03-10T14:53:00.427 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/mon.c/config 2026-03-10T14:53:00.429 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/mon.c/config 2026-03-10T14:53:00.434 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/mon.c/config 2026-03-10T14:53:00.434 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/mon.c/config 2026-03-10T14:53:00.437 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/mon.c/config 2026-03-10T14:53:00.437 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/mon.c/config 2026-03-10T14:53:00.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:00 vm00 bash[28403]: audit 2026-03-10T14:52:59.669581+0000 mon.a (mon.0) 844 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:53:00.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:00 vm00 bash[28403]: audit 2026-03-10T14:52:59.669581+0000 mon.a (mon.0) 844 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:53:00.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:00 vm00 bash[28403]: audit 2026-03-10T14:52:59.677769+0000 mon.a (mon.0) 845 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 
2026-03-10T14:53:00.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:00 vm00 bash[28403]: audit 2026-03-10T14:52:59.677769+0000 mon.a (mon.0) 845 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:53:00.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:00 vm00 bash[28403]: audit 2026-03-10T14:52:59.679248+0000 mon.a (mon.0) 846 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:53:00.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:00 vm00 bash[28403]: audit 2026-03-10T14:52:59.679248+0000 mon.a (mon.0) 846 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:53:00.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:00 vm00 bash[28403]: audit 2026-03-10T14:52:59.679852+0000 mon.a (mon.0) 847 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:53:00.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:00 vm00 bash[28403]: audit 2026-03-10T14:52:59.679852+0000 mon.a (mon.0) 847 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:53:00.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:00 vm00 bash[28403]: audit 2026-03-10T14:52:59.686960+0000 mon.a (mon.0) 848 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:53:00.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:00 vm00 bash[28403]: audit 2026-03-10T14:52:59.686960+0000 mon.a (mon.0) 848 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:53:00.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:00 vm00 bash[20726]: audit 2026-03-10T14:52:59.669581+0000 mon.a (mon.0) 844 : audit 
[INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:53:00.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:00 vm00 bash[20726]: audit 2026-03-10T14:52:59.669581+0000 mon.a (mon.0) 844 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:53:00.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:00 vm00 bash[20726]: audit 2026-03-10T14:52:59.677769+0000 mon.a (mon.0) 845 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:53:00.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:00 vm00 bash[20726]: audit 2026-03-10T14:52:59.677769+0000 mon.a (mon.0) 845 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:53:00.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:00 vm00 bash[20726]: audit 2026-03-10T14:52:59.679248+0000 mon.a (mon.0) 846 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:53:00.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:00 vm00 bash[20726]: audit 2026-03-10T14:52:59.679248+0000 mon.a (mon.0) 846 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:53:00.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:00 vm00 bash[20726]: audit 2026-03-10T14:52:59.679852+0000 mon.a (mon.0) 847 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:53:00.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:00 vm00 bash[20726]: audit 2026-03-10T14:52:59.679852+0000 mon.a (mon.0) 847 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:53:00.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:00 vm00 bash[20726]: 
audit 2026-03-10T14:52:59.686960+0000 mon.a (mon.0) 848 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:53:00.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:00 vm00 bash[20726]: audit 2026-03-10T14:52:59.686960+0000 mon.a (mon.0) 848 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:53:00.905 INFO:teuthology.orchestra.run.vm00.stdout:137438953512 2026-03-10T14:53:00.905 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 -- ceph osd last-stat-seq osd.4 2026-03-10T14:53:00.984 INFO:teuthology.orchestra.run.vm00.stdout:201863462940 2026-03-10T14:53:00.984 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 -- ceph osd last-stat-seq osd.6 2026-03-10T14:53:01.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:00 vm03 bash[23394]: audit 2026-03-10T14:52:59.669581+0000 mon.a (mon.0) 844 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:53:01.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:00 vm03 bash[23394]: audit 2026-03-10T14:52:59.669581+0000 mon.a (mon.0) 844 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:53:01.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:00 vm03 bash[23394]: audit 2026-03-10T14:52:59.677769+0000 mon.a (mon.0) 845 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:53:01.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:00 vm03 bash[23394]: audit 2026-03-10T14:52:59.677769+0000 mon.a (mon.0) 845 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:53:01.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 
10 14:53:00 vm03 bash[23394]: audit 2026-03-10T14:52:59.679248+0000 mon.a (mon.0) 846 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:53:01.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:00 vm03 bash[23394]: audit 2026-03-10T14:52:59.679248+0000 mon.a (mon.0) 846 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:53:01.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:00 vm03 bash[23394]: audit 2026-03-10T14:52:59.679852+0000 mon.a (mon.0) 847 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:53:01.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:00 vm03 bash[23394]: audit 2026-03-10T14:52:59.679852+0000 mon.a (mon.0) 847 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:53:01.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:00 vm03 bash[23394]: audit 2026-03-10T14:52:59.686960+0000 mon.a (mon.0) 848 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:53:01.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:00 vm03 bash[23394]: audit 2026-03-10T14:52:59.686960+0000 mon.a (mon.0) 848 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:53:01.133 INFO:teuthology.orchestra.run.vm00.stdout:111669149743 2026-03-10T14:53:01.133 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 -- ceph osd last-stat-seq osd.3 2026-03-10T14:53:01.385 INFO:teuthology.orchestra.run.vm00.stdout:34359738435 2026-03-10T14:53:01.385 
DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 -- ceph osd last-stat-seq osd.0 2026-03-10T14:53:01.417 INFO:teuthology.orchestra.run.vm00.stdout:55834574909 2026-03-10T14:53:01.417 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 -- ceph osd last-stat-seq osd.1 2026-03-10T14:53:01.518 INFO:teuthology.orchestra.run.vm00.stdout:167503724577 2026-03-10T14:53:01.518 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 -- ceph osd last-stat-seq osd.5 2026-03-10T14:53:01.535 INFO:teuthology.orchestra.run.vm00.stdout:231928234003 2026-03-10T14:53:01.535 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 -- ceph osd last-stat-seq osd.7 2026-03-10T14:53:01.541 INFO:teuthology.orchestra.run.vm00.stdout:77309411382 2026-03-10T14:53:01.541 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 -- ceph osd last-stat-seq osd.2 2026-03-10T14:53:01.688 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:01 vm00 bash[20726]: cluster 2026-03-10T14:53:00.236465+0000 mgr.y (mgr.24425) 56 : cluster [DBG] pgmap v21: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:53:01.689 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:01 vm00 bash[20726]: cluster 2026-03-10T14:53:00.236465+0000 mgr.y 
(mgr.24425) 56 : cluster [DBG] pgmap v21: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:53:01.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:01 vm00 bash[28403]: cluster 2026-03-10T14:53:00.236465+0000 mgr.y (mgr.24425) 56 : cluster [DBG] pgmap v21: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:53:01.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:01 vm00 bash[28403]: cluster 2026-03-10T14:53:00.236465+0000 mgr.y (mgr.24425) 56 : cluster [DBG] pgmap v21: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:53:02.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:01 vm03 bash[23394]: cluster 2026-03-10T14:53:00.236465+0000 mgr.y (mgr.24425) 56 : cluster [DBG] pgmap v21: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:53:02.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:01 vm03 bash[23394]: cluster 2026-03-10T14:53:00.236465+0000 mgr.y (mgr.24425) 56 : cluster [DBG] pgmap v21: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:53:03.715 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 14:53:03 vm00 bash[56709]: ts=2026-03-10T14:53:03.325Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.002555447s 2026-03-10T14:53:04.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:03 vm03 bash[23394]: cluster 2026-03-10T14:53:02.236901+0000 mgr.y (mgr.24425) 57 : cluster [DBG] pgmap v22: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 938 B/s rd, 0 op/s 2026-03-10T14:53:04.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:03 vm03 bash[23394]: cluster 2026-03-10T14:53:02.236901+0000 mgr.y (mgr.24425) 57 : cluster [DBG] pgmap v22: 132 
pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 938 B/s rd, 0 op/s 2026-03-10T14:53:04.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:03 vm00 bash[28403]: cluster 2026-03-10T14:53:02.236901+0000 mgr.y (mgr.24425) 57 : cluster [DBG] pgmap v22: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 938 B/s rd, 0 op/s 2026-03-10T14:53:04.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:03 vm00 bash[28403]: cluster 2026-03-10T14:53:02.236901+0000 mgr.y (mgr.24425) 57 : cluster [DBG] pgmap v22: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 938 B/s rd, 0 op/s 2026-03-10T14:53:04.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:03 vm00 bash[20726]: cluster 2026-03-10T14:53:02.236901+0000 mgr.y (mgr.24425) 57 : cluster [DBG] pgmap v22: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 938 B/s rd, 0 op/s 2026-03-10T14:53:04.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:03 vm00 bash[20726]: cluster 2026-03-10T14:53:02.236901+0000 mgr.y (mgr.24425) 57 : cluster [DBG] pgmap v22: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 938 B/s rd, 0 op/s 2026-03-10T14:53:04.216 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:53:03 vm00 bash[21005]: ::ffff:192.168.123.103 - - [10/Mar/2026:14:53:03] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:53:05.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:04 vm03 bash[23394]: cluster 2026-03-10T14:53:04.237266+0000 mgr.y (mgr.24425) 58 : cluster [DBG] pgmap v23: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 938 B/s rd, 0 op/s 2026-03-10T14:53:05.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:04 vm03 bash[23394]: cluster 2026-03-10T14:53:04.237266+0000 mgr.y (mgr.24425) 58 : cluster [DBG] pgmap v23: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 938 B/s rd, 0 
op/s 2026-03-10T14:53:05.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:04 vm00 bash[28403]: cluster 2026-03-10T14:53:04.237266+0000 mgr.y (mgr.24425) 58 : cluster [DBG] pgmap v23: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 938 B/s rd, 0 op/s 2026-03-10T14:53:05.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:04 vm00 bash[28403]: cluster 2026-03-10T14:53:04.237266+0000 mgr.y (mgr.24425) 58 : cluster [DBG] pgmap v23: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 938 B/s rd, 0 op/s 2026-03-10T14:53:05.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:04 vm00 bash[20726]: cluster 2026-03-10T14:53:04.237266+0000 mgr.y (mgr.24425) 58 : cluster [DBG] pgmap v23: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 938 B/s rd, 0 op/s 2026-03-10T14:53:05.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:04 vm00 bash[20726]: cluster 2026-03-10T14:53:04.237266+0000 mgr.y (mgr.24425) 58 : cluster [DBG] pgmap v23: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 938 B/s rd, 0 op/s 2026-03-10T14:53:05.828 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/mon.c/config 2026-03-10T14:53:05.828 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/mon.c/config 2026-03-10T14:53:05.830 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/mon.c/config 2026-03-10T14:53:05.831 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/mon.c/config 2026-03-10T14:53:05.833 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/mon.c/config 2026-03-10T14:53:05.834 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config 
/var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/mon.c/config 2026-03-10T14:53:05.835 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/mon.c/config 2026-03-10T14:53:05.837 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/mon.c/config 2026-03-10T14:53:06.580 INFO:teuthology.orchestra.run.vm00.stdout:111669149744 2026-03-10T14:53:06.648 INFO:teuthology.orchestra.run.vm00.stdout:137438953513 2026-03-10T14:53:06.793 INFO:teuthology.orchestra.run.vm00.stdout:231928234004 2026-03-10T14:53:06.796 INFO:teuthology.orchestra.run.vm00.stdout:77309411383 2026-03-10T14:53:06.811 INFO:teuthology.orchestra.run.vm00.stdout:167503724578 2026-03-10T14:53:06.831 INFO:teuthology.orchestra.run.vm00.stdout:34359738436 2026-03-10T14:53:06.839 INFO:teuthology.orchestra.run.vm00.stdout:201863462941 2026-03-10T14:53:06.874 INFO:teuthology.orchestra.run.vm00.stdout:55834574910 2026-03-10T14:53:06.885 INFO:tasks.cephadm.ceph_manager.ceph:need seq 111669149743 got 111669149744 for osd.3 2026-03-10T14:53:06.885 DEBUG:teuthology.parallel:result is None 2026-03-10T14:53:06.937 INFO:tasks.cephadm.ceph_manager.ceph:need seq 137438953512 got 137438953513 for osd.4 2026-03-10T14:53:06.937 DEBUG:teuthology.parallel:result is None 2026-03-10T14:53:06.979 INFO:tasks.cephadm.ceph_manager.ceph:need seq 167503724577 got 167503724578 for osd.5 2026-03-10T14:53:06.979 DEBUG:teuthology.parallel:result is None 2026-03-10T14:53:07.025 INFO:tasks.cephadm.ceph_manager.ceph:need seq 77309411382 got 77309411383 for osd.2 2026-03-10T14:53:07.026 DEBUG:teuthology.parallel:result is None 2026-03-10T14:53:07.032 INFO:tasks.cephadm.ceph_manager.ceph:need seq 34359738435 got 34359738436 for osd.0 2026-03-10T14:53:07.033 DEBUG:teuthology.parallel:result is None 2026-03-10T14:53:07.067 INFO:tasks.cephadm.ceph_manager.ceph:need seq 55834574909 got 55834574910 for osd.1 2026-03-10T14:53:07.067 
DEBUG:teuthology.parallel:result is None 2026-03-10T14:53:07.068 INFO:tasks.cephadm.ceph_manager.ceph:need seq 231928234003 got 231928234004 for osd.7 2026-03-10T14:53:07.068 DEBUG:teuthology.parallel:result is None 2026-03-10T14:53:07.077 INFO:tasks.cephadm.ceph_manager.ceph:need seq 201863462940 got 201863462941 for osd.6 2026-03-10T14:53:07.078 DEBUG:teuthology.parallel:result is None 2026-03-10T14:53:07.078 INFO:tasks.cephadm.ceph_manager.ceph:waiting for clean 2026-03-10T14:53:07.078 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 -- ceph pg dump --format=json 2026-03-10T14:53:07.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:07 vm00 bash[28403]: cluster 2026-03-10T14:53:06.237776+0000 mgr.y (mgr.24425) 59 : cluster [DBG] pgmap v24: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:53:07.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:07 vm00 bash[28403]: cluster 2026-03-10T14:53:06.237776+0000 mgr.y (mgr.24425) 59 : cluster [DBG] pgmap v24: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:53:07.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:07 vm00 bash[28403]: audit 2026-03-10T14:53:06.578976+0000 mon.a (mon.0) 849 : audit [DBG] from='client.? 192.168.123.100:0/2495922687' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-10T14:53:07.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:07 vm00 bash[28403]: audit 2026-03-10T14:53:06.578976+0000 mon.a (mon.0) 849 : audit [DBG] from='client.? 
192.168.123.100:0/2495922687' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-10T14:53:07.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:07 vm00 bash[28403]: audit 2026-03-10T14:53:06.648021+0000 mon.a (mon.0) 850 : audit [DBG] from='client.? 192.168.123.100:0/2112917403' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-10T14:53:07.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:07 vm00 bash[28403]: audit 2026-03-10T14:53:06.648021+0000 mon.a (mon.0) 850 : audit [DBG] from='client.? 192.168.123.100:0/2112917403' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-10T14:53:07.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:07 vm00 bash[28403]: audit 2026-03-10T14:53:06.793135+0000 mon.a (mon.0) 851 : audit [DBG] from='client.? 192.168.123.100:0/482336223' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-10T14:53:07.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:07 vm00 bash[28403]: audit 2026-03-10T14:53:06.793135+0000 mon.a (mon.0) 851 : audit [DBG] from='client.? 192.168.123.100:0/482336223' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-10T14:53:07.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:07 vm00 bash[28403]: audit 2026-03-10T14:53:06.797707+0000 mon.c (mon.2) 22 : audit [DBG] from='client.? 192.168.123.100:0/3184579833' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-10T14:53:07.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:07 vm00 bash[28403]: audit 2026-03-10T14:53:06.797707+0000 mon.c (mon.2) 22 : audit [DBG] from='client.? 
192.168.123.100:0/3184579833' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-10T14:53:07.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:07 vm00 bash[28403]: audit 2026-03-10T14:53:06.813640+0000 mon.a (mon.0) 852 : audit [DBG] from='client.? 192.168.123.100:0/2950133323' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-10T14:53:07.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:07 vm00 bash[28403]: audit 2026-03-10T14:53:06.813640+0000 mon.a (mon.0) 852 : audit [DBG] from='client.? 192.168.123.100:0/2950133323' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-10T14:53:07.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:07 vm00 bash[28403]: audit 2026-03-10T14:53:06.829491+0000 mon.c (mon.2) 23 : audit [DBG] from='client.? 192.168.123.100:0/2465169167' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T14:53:07.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:07 vm00 bash[28403]: audit 2026-03-10T14:53:06.829491+0000 mon.c (mon.2) 23 : audit [DBG] from='client.? 192.168.123.100:0/2465169167' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T14:53:07.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:07 vm00 bash[28403]: audit 2026-03-10T14:53:06.839381+0000 mon.a (mon.0) 853 : audit [DBG] from='client.? 192.168.123.100:0/1304036835' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-10T14:53:07.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:07 vm00 bash[28403]: audit 2026-03-10T14:53:06.839381+0000 mon.a (mon.0) 853 : audit [DBG] from='client.? 
192.168.123.100:0/1304036835' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-10T14:53:07.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:07 vm00 bash[28403]: audit 2026-03-10T14:53:06.874147+0000 mon.a (mon.0) 854 : audit [DBG] from='client.? 192.168.123.100:0/1239818718' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T14:53:07.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:07 vm00 bash[28403]: audit 2026-03-10T14:53:06.874147+0000 mon.a (mon.0) 854 : audit [DBG] from='client.? 192.168.123.100:0/1239818718' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T14:53:07.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:07 vm00 bash[20726]: cluster 2026-03-10T14:53:06.237776+0000 mgr.y (mgr.24425) 59 : cluster [DBG] pgmap v24: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:53:07.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:07 vm00 bash[20726]: cluster 2026-03-10T14:53:06.237776+0000 mgr.y (mgr.24425) 59 : cluster [DBG] pgmap v24: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:53:07.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:07 vm00 bash[20726]: audit 2026-03-10T14:53:06.578976+0000 mon.a (mon.0) 849 : audit [DBG] from='client.? 192.168.123.100:0/2495922687' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-10T14:53:07.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:07 vm00 bash[20726]: audit 2026-03-10T14:53:06.578976+0000 mon.a (mon.0) 849 : audit [DBG] from='client.? 
192.168.123.100:0/2495922687' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-10T14:53:07.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:07 vm00 bash[20726]: audit 2026-03-10T14:53:06.648021+0000 mon.a (mon.0) 850 : audit [DBG] from='client.? 192.168.123.100:0/2112917403' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-10T14:53:07.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:07 vm00 bash[20726]: audit 2026-03-10T14:53:06.648021+0000 mon.a (mon.0) 850 : audit [DBG] from='client.? 192.168.123.100:0/2112917403' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-10T14:53:07.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:07 vm00 bash[20726]: audit 2026-03-10T14:53:06.793135+0000 mon.a (mon.0) 851 : audit [DBG] from='client.? 192.168.123.100:0/482336223' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-10T14:53:07.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:07 vm00 bash[20726]: audit 2026-03-10T14:53:06.793135+0000 mon.a (mon.0) 851 : audit [DBG] from='client.? 192.168.123.100:0/482336223' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-10T14:53:07.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:07 vm00 bash[20726]: audit 2026-03-10T14:53:06.797707+0000 mon.c (mon.2) 22 : audit [DBG] from='client.? 192.168.123.100:0/3184579833' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-10T14:53:07.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:07 vm00 bash[20726]: audit 2026-03-10T14:53:06.797707+0000 mon.c (mon.2) 22 : audit [DBG] from='client.? 
192.168.123.100:0/3184579833' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-10T14:53:07.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:07 vm00 bash[20726]: audit 2026-03-10T14:53:06.813640+0000 mon.a (mon.0) 852 : audit [DBG] from='client.? 192.168.123.100:0/2950133323' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-10T14:53:07.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:07 vm00 bash[20726]: audit 2026-03-10T14:53:06.813640+0000 mon.a (mon.0) 852 : audit [DBG] from='client.? 192.168.123.100:0/2950133323' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-10T14:53:07.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:07 vm00 bash[20726]: audit 2026-03-10T14:53:06.829491+0000 mon.c (mon.2) 23 : audit [DBG] from='client.? 192.168.123.100:0/2465169167' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T14:53:07.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:07 vm00 bash[20726]: audit 2026-03-10T14:53:06.829491+0000 mon.c (mon.2) 23 : audit [DBG] from='client.? 192.168.123.100:0/2465169167' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T14:53:07.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:07 vm00 bash[20726]: audit 2026-03-10T14:53:06.839381+0000 mon.a (mon.0) 853 : audit [DBG] from='client.? 192.168.123.100:0/1304036835' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-10T14:53:07.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:07 vm00 bash[20726]: audit 2026-03-10T14:53:06.839381+0000 mon.a (mon.0) 853 : audit [DBG] from='client.? 
192.168.123.100:0/1304036835' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-10T14:53:07.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:07 vm00 bash[20726]: audit 2026-03-10T14:53:06.874147+0000 mon.a (mon.0) 854 : audit [DBG] from='client.? 192.168.123.100:0/1239818718' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T14:53:07.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:07 vm00 bash[20726]: audit 2026-03-10T14:53:06.874147+0000 mon.a (mon.0) 854 : audit [DBG] from='client.? 192.168.123.100:0/1239818718' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T14:53:07.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:07 vm03 bash[23394]: cluster 2026-03-10T14:53:06.237776+0000 mgr.y (mgr.24425) 59 : cluster [DBG] pgmap v24: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:53:07.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:07 vm03 bash[23394]: cluster 2026-03-10T14:53:06.237776+0000 mgr.y (mgr.24425) 59 : cluster [DBG] pgmap v24: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:53:07.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:07 vm03 bash[23394]: audit 2026-03-10T14:53:06.578976+0000 mon.a (mon.0) 849 : audit [DBG] from='client.? 192.168.123.100:0/2495922687' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-10T14:53:07.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:07 vm03 bash[23394]: audit 2026-03-10T14:53:06.578976+0000 mon.a (mon.0) 849 : audit [DBG] from='client.? 
192.168.123.100:0/2495922687' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-10T14:53:07.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:07 vm03 bash[23394]: audit 2026-03-10T14:53:06.648021+0000 mon.a (mon.0) 850 : audit [DBG] from='client.? 192.168.123.100:0/2112917403' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-10T14:53:07.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:07 vm03 bash[23394]: audit 2026-03-10T14:53:06.648021+0000 mon.a (mon.0) 850 : audit [DBG] from='client.? 192.168.123.100:0/2112917403' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-10T14:53:07.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:07 vm03 bash[23394]: audit 2026-03-10T14:53:06.793135+0000 mon.a (mon.0) 851 : audit [DBG] from='client.? 192.168.123.100:0/482336223' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-10T14:53:07.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:07 vm03 bash[23394]: audit 2026-03-10T14:53:06.793135+0000 mon.a (mon.0) 851 : audit [DBG] from='client.? 192.168.123.100:0/482336223' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-10T14:53:07.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:07 vm03 bash[23394]: audit 2026-03-10T14:53:06.797707+0000 mon.c (mon.2) 22 : audit [DBG] from='client.? 192.168.123.100:0/3184579833' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-10T14:53:07.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:07 vm03 bash[23394]: audit 2026-03-10T14:53:06.797707+0000 mon.c (mon.2) 22 : audit [DBG] from='client.? 
192.168.123.100:0/3184579833' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-10T14:53:07.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:07 vm03 bash[23394]: audit 2026-03-10T14:53:06.813640+0000 mon.a (mon.0) 852 : audit [DBG] from='client.? 192.168.123.100:0/2950133323' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-10T14:53:07.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:07 vm03 bash[23394]: audit 2026-03-10T14:53:06.813640+0000 mon.a (mon.0) 852 : audit [DBG] from='client.? 192.168.123.100:0/2950133323' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-10T14:53:07.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:07 vm03 bash[23394]: audit 2026-03-10T14:53:06.829491+0000 mon.c (mon.2) 23 : audit [DBG] from='client.? 192.168.123.100:0/2465169167' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T14:53:07.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:07 vm03 bash[23394]: audit 2026-03-10T14:53:06.829491+0000 mon.c (mon.2) 23 : audit [DBG] from='client.? 192.168.123.100:0/2465169167' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T14:53:07.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:07 vm03 bash[23394]: audit 2026-03-10T14:53:06.839381+0000 mon.a (mon.0) 853 : audit [DBG] from='client.? 192.168.123.100:0/1304036835' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-10T14:53:07.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:07 vm03 bash[23394]: audit 2026-03-10T14:53:06.839381+0000 mon.a (mon.0) 853 : audit [DBG] from='client.? 
192.168.123.100:0/1304036835' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-10T14:53:07.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:07 vm03 bash[23394]: audit 2026-03-10T14:53:06.874147+0000 mon.a (mon.0) 854 : audit [DBG] from='client.? 192.168.123.100:0/1239818718' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T14:53:07.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:07 vm03 bash[23394]: audit 2026-03-10T14:53:06.874147+0000 mon.a (mon.0) 854 : audit [DBG] from='client.? 192.168.123.100:0/1239818718' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T14:53:08.875 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 14:53:08 vm03 bash[48459]: debug there is no tcmu-runner data available 2026-03-10T14:53:09.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:09 vm00 bash[28403]: cluster 2026-03-10T14:53:08.238047+0000 mgr.y (mgr.24425) 60 : cluster [DBG] pgmap v25: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:53:09.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:09 vm00 bash[28403]: cluster 2026-03-10T14:53:08.238047+0000 mgr.y (mgr.24425) 60 : cluster [DBG] pgmap v25: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:53:09.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:09 vm00 bash[28403]: audit 2026-03-10T14:53:08.521383+0000 mgr.y (mgr.24425) 61 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:53:09.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:09 vm00 bash[28403]: audit 2026-03-10T14:53:08.521383+0000 mgr.y (mgr.24425) 61 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 
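The stat-flush round-trip above follows a simple pattern: teuthology records a baseline sequence number per OSD, re-runs `ceph osd last-stat-seq osd.N` through `cephadm shell`, and proceeds once the returned value advances past the baseline ("need seq 111669149743 got 111669149744 for osd.3"), then moves on to "waiting for clean" via `ceph pg dump`. A minimal polling sketch of that wait, with a caller-supplied query function standing in for the `cephadm shell` invocation (this is an illustration of the pattern, not teuthology's actual `flush_pg_stats` code):

```python
import time

def wait_for_stat_seq(get_last_stat_seq, osd_id, need_seq, timeout=90, interval=5):
    """Poll an OSD's last-stat-seq until it advances past a recorded baseline.

    get_last_stat_seq is a caller-supplied callable standing in for running
    `ceph osd last-stat-seq osd.N` via `cephadm shell`, as seen in the log.
    """
    deadline = time.time() + timeout
    while True:
        got = get_last_stat_seq(osd_id)
        if got > need_seq:
            # Mirrors the log's "need seq <N> got <N+1> for osd.<id>" success case.
            return got
        if time.time() > deadline:
            raise TimeoutError(
                f"osd.{osd_id} stat seq stuck at {got} (need > {need_seq})")
        time.sleep(interval)

# Stubbed query for illustration: the seq advances by one on the second poll,
# matching the osd.3 values from the log above.
seqs = iter([111669149743, 111669149744])
result = wait_for_stat_seq(lambda osd: next(seqs), 3, 111669149743,
                           timeout=10, interval=0)
```

After all eight OSDs pass this check, the run continues to the cluster-wide clean check, which is why the next command in the log is `ceph pg dump --format=json`.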
2026-03-10T14:53:09.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:09 vm00 bash[28403]: audit 2026-03-10T14:53:09.328934+0000 mon.a (mon.0) 855 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:53:09.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:09 vm00 bash[28403]: audit 2026-03-10T14:53:09.328934+0000 mon.a (mon.0) 855 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:53:09.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:09 vm00 bash[20726]: cluster 2026-03-10T14:53:08.238047+0000 mgr.y (mgr.24425) 60 : cluster [DBG] pgmap v25: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:53:09.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:09 vm00 bash[20726]: cluster 2026-03-10T14:53:08.238047+0000 mgr.y (mgr.24425) 60 : cluster [DBG] pgmap v25: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:53:09.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:09 vm00 bash[20726]: audit 2026-03-10T14:53:08.521383+0000 mgr.y (mgr.24425) 61 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:53:09.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:09 vm00 bash[20726]: audit 2026-03-10T14:53:08.521383+0000 mgr.y (mgr.24425) 61 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:53:09.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:09 vm00 bash[20726]: audit 2026-03-10T14:53:09.328934+0000 mon.a (mon.0) 855 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": 
"json"}]: dispatch 2026-03-10T14:53:09.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:09 vm00 bash[20726]: audit 2026-03-10T14:53:09.328934+0000 mon.a (mon.0) 855 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:53:09.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:09 vm03 bash[23394]: cluster 2026-03-10T14:53:08.238047+0000 mgr.y (mgr.24425) 60 : cluster [DBG] pgmap v25: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:53:09.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:09 vm03 bash[23394]: cluster 2026-03-10T14:53:08.238047+0000 mgr.y (mgr.24425) 60 : cluster [DBG] pgmap v25: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:53:09.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:09 vm03 bash[23394]: audit 2026-03-10T14:53:08.521383+0000 mgr.y (mgr.24425) 61 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:53:09.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:09 vm03 bash[23394]: audit 2026-03-10T14:53:08.521383+0000 mgr.y (mgr.24425) 61 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:53:09.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:09 vm03 bash[23394]: audit 2026-03-10T14:53:09.328934+0000 mon.a (mon.0) 855 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:53:09.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:09 vm03 bash[23394]: audit 2026-03-10T14:53:09.328934+0000 mon.a (mon.0) 855 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist 
ls", "format": "json"}]: dispatch 2026-03-10T14:53:11.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:11 vm00 bash[28403]: cluster 2026-03-10T14:53:10.238453+0000 mgr.y (mgr.24425) 62 : cluster [DBG] pgmap v26: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:53:11.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:11 vm00 bash[28403]: cluster 2026-03-10T14:53:10.238453+0000 mgr.y (mgr.24425) 62 : cluster [DBG] pgmap v26: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:53:11.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:11 vm00 bash[20726]: cluster 2026-03-10T14:53:10.238453+0000 mgr.y (mgr.24425) 62 : cluster [DBG] pgmap v26: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:53:11.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:11 vm00 bash[20726]: cluster 2026-03-10T14:53:10.238453+0000 mgr.y (mgr.24425) 62 : cluster [DBG] pgmap v26: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:53:11.766 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/mon.c/config 2026-03-10T14:53:11.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:11 vm03 bash[23394]: cluster 2026-03-10T14:53:10.238453+0000 mgr.y (mgr.24425) 62 : cluster [DBG] pgmap v26: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:53:11.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:11 vm03 bash[23394]: cluster 2026-03-10T14:53:10.238453+0000 mgr.y (mgr.24425) 62 : cluster [DBG] pgmap v26: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:53:12.026 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T14:53:12.027 
INFO:teuthology.orchestra.run.vm00.stderr:dumped all 2026-03-10T14:53:12.089 INFO:teuthology.orchestra.run.vm00.stdout:{"pg_ready":true,"pg_map":{"version":26,"stamp":"2026-03-10T14:53:10.238192+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":465419,"num_objects":199,"num_object_clones":0,"num_object_copies":597,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":199,"num_whiteouts":0,"num_read":911,"num_read_kb":770,"num_write":505,"num_write_kb":629,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":538,"ondisk_log_size":538,"up":396,"acting":396,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":396,"num_osds":8,"num_per_pool_osds":8,"num_per_pool_omap_osds":8,"kb":167739392,"kb_used":221480,"kb_used_data":6780,"kb_used_omap":12,"kb_used_meta":214515,"kb_avail":167517912,"statfs":{"total":171765137408,"available":171538341888,"internally_reserved":0,"allocated":6942720,"data_stored":3495745,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":12712,"internal_metadata":219663960},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":
0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":15,"num_read_kb":15,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"12.002302"},"pg_stats":[{"pgid":"6.1b","version":"0'0","reported_seq":21,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.337947+0000","last_change":"2026-03-10T14:51:59.510313+0000","last_active":"2026-03-10T14:52:24.337947+0000","last_peered":"2026-03-10T14:52:24.337947+0000","last_clean":"2026-03-10T14:52:24.337947+0000","last_became_active":"2026-03-10T14:51:59.510232+0000","last_became_peered":"2026-03-10T14:51:59.510232+0000","last_unstale":"2026-03-10T14:52:24.337947+0000","last_undegraded":"2026-03-10T14:52:24.337947+0000","last_fullsi
zed":"2026-03-10T14:52:24.337947+0000","mapping_epoch":63,"log_start":"0'0","ondisk_log_start":"0'0","created":63,"last_epoch_clean":64,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_clean_scrub_stamp":"2026-03-10T14:51:57.913312+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T16:03:24.096221+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,6],"acting":[3,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.1f","version":"0'0","reported_seq":33,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.2
13792+0000","last_change":"2026-03-10T14:51:52.748444+0000","last_active":"2026-03-10T14:52:24.213792+0000","last_peered":"2026-03-10T14:52:24.213792+0000","last_clean":"2026-03-10T14:52:24.213792+0000","last_became_active":"2026-03-10T14:51:52.748359+0000","last_became_peered":"2026-03-10T14:51:52.748359+0000","last_unstale":"2026-03-10T14:52:24.213792+0000","last_undegraded":"2026-03-10T14:52:24.213792+0000","last_fullsized":"2026-03-10T14:52:24.213792+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_clean_scrub_stamp":"2026-03-10T14:51:51.696445+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-12T02:13:48.867889+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,4],"acting":[0,7,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.1e","version":"65'10","reported_seq":44,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.338107+0000","last_change":"2026-03-10T14:51:54.924877+0000","last_active":"2026-03-10T14:52:24.338107+0000","last_peered":"2026-03-10T14:52:24.338107+0000","last_clean":"2026-03-10T14:52:24.338107+0000","last_became_active":"2026-03-10T14:51:54.924720+0000","last_became_peered":"2026-03-10T14:51:54.924720+0000","last_unstale":"2026-03-10T14:52:24.338107+0000","last_undegraded":"2026-03-10T14:52:24.338107+0000","last_fullsized":"2026-03-10T14:52:24.338107+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:53.7
08867+0000","last_clean_scrub_stamp":"2026-03-10T14:51:53.708867+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-12T02:47:34.514256+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,2],"acting":[3,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.18","version":"0'0","reported_seq":25,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.329046+0000","last_change":"2026-03-10T14:51:56.944363+0000","last_active":"2026-03-10T14:52:24.329046+0000","last_peered":"2026-03-10T14:52:24.329046+0000","last_clean":"2026-03-10T14:52:24.329046+0000","last_became_active":"2026-03-10T14:51:56.943948+0000","last_became_peered":"2026-03-10T14:51:56.943948+00
00","last_unstale":"2026-03-10T14:52:24.329046+0000","last_undegraded":"2026-03-10T14:52:24.329046+0000","last_fullsized":"2026-03-10T14:52:24.329046+0000","mapping_epoch":61,"log_start":"0'0","ondisk_log_start":"0'0","created":61,"last_epoch_clean":62,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_clean_scrub_stamp":"2026-03-10T14:51:55.902610+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-12T01:33:25.301504+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,6,1],"acting":[4,6,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2
.1e","version":"0'0","reported_seq":33,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.337412+0000","last_change":"2026-03-10T14:51:52.731392+0000","last_active":"2026-03-10T14:52:24.337412+0000","last_peered":"2026-03-10T14:52:24.337412+0000","last_clean":"2026-03-10T14:52:24.337412+0000","last_became_active":"2026-03-10T14:51:52.731313+0000","last_became_peered":"2026-03-10T14:51:52.731313+0000","last_unstale":"2026-03-10T14:52:24.337412+0000","last_undegraded":"2026-03-10T14:52:24.337412+0000","last_fullsized":"2026-03-10T14:52:24.337412+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_clean_scrub_stamp":"2026-03-10T14:51:51.696445+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-12T01:21:58.842652+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,5],"acting":[3,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.1f","version":"65'11","reported_seq":48,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.214288+0000","last_change":"2026-03-10T14:51:54.926737+0000","last_active":"2026-03-10T14:52:24.214288+0000","last_peered":"2026-03-10T14:52:24.214288+0000","last_clean":"2026-03-10T14:52:24.214288+0000","last_became_active":"2026-03-10T14:51:54.926418+0000","last_became_peered":"2026-03-10T14:51:54.926418+0000","last_unstale":"2026-03-10T14:52:24.214288+0000","last_undegraded":"2026-03-10T14:52:24.214288+0000","last_fullsized":"2026-03-10T14:52:24.214288+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:53.7
08867+0000","last_clean_scrub_stamp":"2026-03-10T14:51:53.708867+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T22:42:57.575249+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,2],"acting":[0,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"5.19","version":"0'0","reported_seq":25,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.212742+0000","last_change":"2026-03-10T14:51:56.946648+0000","last_active":"2026-03-10T14:52:24.212742+0000","last_peered":"2026-03-10T14:52:24.212742+0000","last_clean":"2026-03-10T14:52:24.212742+0000","last_became_active":"2026-03-10T14:51:56.946448+0000","last_became_peered":"2026-03-10T14:51:56.946448+
0000","last_unstale":"2026-03-10T14:52:24.212742+0000","last_undegraded":"2026-03-10T14:52:24.212742+0000","last_fullsized":"2026-03-10T14:52:24.212742+0000","mapping_epoch":61,"log_start":"0'0","ondisk_log_start":"0'0","created":61,"last_epoch_clean":62,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_clean_scrub_stamp":"2026-03-10T14:51:55.902610+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T20:27:44.047187+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,7],"acting":[1,5,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":
"6.1a","version":"0'0","reported_seq":21,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.328528+0000","last_change":"2026-03-10T14:51:58.950929+0000","last_active":"2026-03-10T14:52:24.328528+0000","last_peered":"2026-03-10T14:52:24.328528+0000","last_clean":"2026-03-10T14:52:24.328528+0000","last_became_active":"2026-03-10T14:51:58.950758+0000","last_became_peered":"2026-03-10T14:51:58.950758+0000","last_unstale":"2026-03-10T14:52:24.328528+0000","last_undegraded":"2026-03-10T14:52:24.328528+0000","last_fullsized":"2026-03-10T14:52:24.328528+0000","mapping_epoch":63,"log_start":"0'0","ondisk_log_start":"0'0","created":63,"last_epoch_clean":64,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_clean_scrub_stamp":"2026-03-10T14:51:57.913312+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T20:13:31.981592+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,5,1],"acting":[4,5,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.1d","version":"0'0","reported_seq":33,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.262966+0000","last_change":"2026-03-10T14:51:52.749960+0000","last_active":"2026-03-10T14:52:25.262966+0000","last_peered":"2026-03-10T14:52:25.262966+0000","last_clean":"2026-03-10T14:52:25.262966+0000","last_became_active":"2026-03-10T14:51:52.749751+0000","last_became_peered":"2026-03-10T14:51:52.749751+0000","last_unstale":"2026-03-10T14:52:25.262966+0000","last_undegraded":"2026-03-10T14:52:25.262966+0000","last_fullsized":"2026-03-10T14:52:25.262966+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:51.696
445+0000","last_clean_scrub_stamp":"2026-03-10T14:51:51.696445+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T21:38:35.842261+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,0],"acting":[7,6,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.1c","version":"65'15","reported_seq":54,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.211342+0000","last_change":"2026-03-10T14:51:54.930411+0000","last_active":"2026-03-10T14:52:24.211342+0000","last_peered":"2026-03-10T14:52:24.211342+0000","last_clean":"2026-03-10T14:52:24.211342+0000","last_became_active":"2026-03-10T14:51:54.930337+0000","last_became_peered":"2026-03-10T14:51:54.930337+0000","l
ast_unstale":"2026-03-10T14:52:24.211342+0000","last_undegraded":"2026-03-10T14:52:24.211342+0000","last_fullsized":"2026-03-10T14:52:24.211342+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_clean_scrub_stamp":"2026-03-10T14:51:53.708867+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T20:42:25.792907+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,1],"acting":[5,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":
"5.1a","version":"0'0","reported_seq":25,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.264391+0000","last_change":"2026-03-10T14:51:56.948720+0000","last_active":"2026-03-10T14:52:25.264391+0000","last_peered":"2026-03-10T14:52:25.264391+0000","last_clean":"2026-03-10T14:52:25.264391+0000","last_became_active":"2026-03-10T14:51:56.948586+0000","last_became_peered":"2026-03-10T14:51:56.948586+0000","last_unstale":"2026-03-10T14:52:25.264391+0000","last_undegraded":"2026-03-10T14:52:25.264391+0000","last_fullsized":"2026-03-10T14:52:25.264391+0000","mapping_epoch":61,"log_start":"0'0","ondisk_log_start":"0'0","created":61,"last_epoch_clean":62,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_clean_scrub_stamp":"2026-03-10T14:51:55.902610+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:59:29.559689+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,1],"acting":[7,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.19","version":"0'0","reported_seq":21,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.211315+0000","last_change":"2026-03-10T14:51:58.952934+0000","last_active":"2026-03-10T14:52:24.211315+0000","last_peered":"2026-03-10T14:52:24.211315+0000","last_clean":"2026-03-10T14:52:24.211315+0000","last_became_active":"2026-03-10T14:51:58.952483+0000","last_became_peered":"2026-03-10T14:51:58.952483+0000","last_unstale":"2026-03-10T14:52:24.211315+0000","last_undegraded":"2026-03-10T14:52:24.211315+0000","last_fullsized":"2026-03-10T14:52:24.211315+0000","mapping_epoch":63,"log_start":"0'0","ondisk_log_start":"0'0","created":63,"last_epoch_clean":64,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:57.913
312+0000","last_clean_scrub_stamp":"2026-03-10T14:51:57.913312+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-12T00:22:30.851807+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,3],"acting":[5,1,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"2.1c","version":"0'0","reported_seq":33,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.263077+0000","last_change":"2026-03-10T14:51:52.745241+0000","last_active":"2026-03-10T14:52:25.263077+0000","last_peered":"2026-03-10T14:52:25.263077+0000","last_clean":"2026-03-10T14:52:25.263077+0000","last_became_active":"2026-03-10T14:51:52.741320+0000","last_became_peered":"2026-03-10T14:51:52.741320+0000","las
t_unstale":"2026-03-10T14:52:25.263077+0000","last_undegraded":"2026-03-10T14:52:25.263077+0000","last_fullsized":"2026-03-10T14:52:25.263077+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_clean_scrub_stamp":"2026-03-10T14:51:51.696445+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T22:11:20.736130+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,5,2],"acting":[7,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.1d","ve
rsion":"65'12","reported_seq":52,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.210550+0000","last_change":"2026-03-10T14:51:54.933735+0000","last_active":"2026-03-10T14:52:24.210550+0000","last_peered":"2026-03-10T14:52:24.210550+0000","last_clean":"2026-03-10T14:52:24.210550+0000","last_became_active":"2026-03-10T14:51:54.933207+0000","last_became_peered":"2026-03-10T14:51:54.933207+0000","last_unstale":"2026-03-10T14:52:24.210550+0000","last_undegraded":"2026-03-10T14:52:24.210550+0000","last_fullsized":"2026-03-10T14:52:24.210550+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_clean_scrub_stamp":"2026-03-10T14:51:53.708867+0000","objects_scrubbed":0,"log_size":12,"log_dups_size":0,"ondisk_log_size":12,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T22:27:21.133155+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":220,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":25,"num_read_kb":16,"num_write":14,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,6],"acting":[5,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.1b","version":"0'0","reported_seq":25,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.210496+0000","last_change":"2026-03-10T14:51:56.939450+0000","last_active":"2026-03-10T14:52:24.210496+0000","last_peered":"2026-03-10T14:52:24.210496+0000","last_clean":"2026-03-10T14:52:24.210496+0000","last_became_active":"2026-03-10T14:51:56.939338+0000","last_became_peered":"2026-03-10T14:51:56.939338+0000","last_unstale":"2026-03-10T14:52:24.210496+0000","last_undegraded":"2026-03-10T14:52:24.210496+0000","last_fullsized":"2026-03-10T14:52:24.210496+0000","mapping_epoch":61,"log_start":"0'0","ondisk_log_start":"0'0","created":61,"last_epoch_clean":62,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:
55.902610+0000","last_clean_scrub_stamp":"2026-03-10T14:51:55.902610+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T16:22:14.360831+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,0,7],"acting":[5,0,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.18","version":"0'0","reported_seq":21,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.213696+0000","last_change":"2026-03-10T14:51:58.945479+0000","last_active":"2026-03-10T14:52:24.213696+0000","last_peered":"2026-03-10T14:52:24.213696+0000","last_clean":"2026-03-10T14:52:24.213696+0000","last_became_active":"2026-03-10T14:51:58.945393+0000","last_became_peered":"2026-03-10T14:51:58.945393+0000
","last_unstale":"2026-03-10T14:52:24.213696+0000","last_undegraded":"2026-03-10T14:52:24.213696+0000","last_fullsized":"2026-03-10T14:52:24.213696+0000","mapping_epoch":63,"log_start":"0'0","ondisk_log_start":"0'0","created":63,"last_epoch_clean":64,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_clean_scrub_stamp":"2026-03-10T14:51:57.913312+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T18:09:12.837014+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,7],"acting":[0,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.a
","version":"65'19","reported_seq":60,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.243184+0000","last_change":"2026-03-10T14:51:54.941869+0000","last_active":"2026-03-10T14:52:25.243184+0000","last_peered":"2026-03-10T14:52:25.243184+0000","last_clean":"2026-03-10T14:52:25.243184+0000","last_became_active":"2026-03-10T14:51:54.941693+0000","last_became_peered":"2026-03-10T14:51:54.941693+0000","last_unstale":"2026-03-10T14:52:25.243184+0000","last_undegraded":"2026-03-10T14:52:25.243184+0000","last_fullsized":"2026-03-10T14:52:25.243184+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_clean_scrub_stamp":"2026-03-10T14:51:53.708867+0000","objects_scrubbed":0,"log_size":19,"log_dups_size":0,"ondisk_log_size":19,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:37:09.120561+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":9,"num_object_clones":0,"num_object_copies":27,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":9,"num_whiteouts":0,"num_read":32,"num_read_kb":21,"num_write":20,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,1],"acting":[6,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"2.b","version":"0'0","reported_seq":33,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.263195+0000","last_change":"2026-03-10T14:51:52.746413+0000","last_active":"2026-03-10T14:52:25.263195+0000","last_peered":"2026-03-10T14:52:25.263195+0000","last_clean":"2026-03-10T14:52:25.263195+0000","last_became_active":"2026-03-10T14:51:52.746308+0000","last_became_peered":"2026-03-10T14:51:52.746308+0000","last_unstale":"2026-03-10T14:52:25.263195+0000","last_undegraded":"2026-03-10T14:52:25.263195+0000","last_fullsized":"2026-03-10T14:52:25.263195+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:5
1.696445+0000","last_clean_scrub_stamp":"2026-03-10T14:51:51.696445+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T19:39:13.250651+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,5],"acting":[7,4,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.c","version":"0'0","reported_seq":25,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.212670+0000","last_change":"2026-03-10T14:51:56.948114+0000","last_active":"2026-03-10T14:52:24.212670+0000","last_peered":"2026-03-10T14:52:24.212670+0000","last_clean":"2026-03-10T14:52:24.212670+0000","last_became_active":"2026-03-10T14:51:56.947831+0000","last_became_peered":"2026-03-10T14:51:56.947831+0000",
"last_unstale":"2026-03-10T14:52:24.212670+0000","last_undegraded":"2026-03-10T14:52:24.212670+0000","last_fullsized":"2026-03-10T14:52:24.212670+0000","mapping_epoch":61,"log_start":"0'0","ondisk_log_start":"0'0","created":61,"last_epoch_clean":62,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_clean_scrub_stamp":"2026-03-10T14:51:55.902610+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T20:14:03.965576+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,4,0],"acting":[1,4,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.f",
"version":"0'0","reported_seq":21,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.194790+0000","last_change":"2026-03-10T14:51:58.948870+0000","last_active":"2026-03-10T14:52:25.194790+0000","last_peered":"2026-03-10T14:52:25.194790+0000","last_clean":"2026-03-10T14:52:25.194790+0000","last_became_active":"2026-03-10T14:51:58.948654+0000","last_became_peered":"2026-03-10T14:51:58.948654+0000","last_unstale":"2026-03-10T14:52:25.194790+0000","last_undegraded":"2026-03-10T14:52:25.194790+0000","last_fullsized":"2026-03-10T14:52:25.194790+0000","mapping_epoch":63,"log_start":"0'0","ondisk_log_start":"0'0","created":63,"last_epoch_clean":64,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_clean_scrub_stamp":"2026-03-10T14:51:57.913312+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T21:48:58.479864+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,3,4],"acting":[2,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"3.b","version":"65'9","reported_seq":45,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.337525+0000","last_change":"2026-03-10T14:51:54.930764+0000","last_active":"2026-03-10T14:52:24.337525+0000","last_peered":"2026-03-10T14:52:24.337525+0000","last_clean":"2026-03-10T14:52:24.337525+0000","last_became_active":"2026-03-10T14:51:54.930199+0000","last_became_peered":"2026-03-10T14:51:54.930199+0000","last_unstale":"2026-03-10T14:52:24.337525+0000","last_undegraded":"2026-03-10T14:52:24.337525+0000","last_fullsized":"2026-03-10T14:52:24.337525+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:53.708
867+0000","last_clean_scrub_stamp":"2026-03-10T14:51:53.708867+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T19:39:08.203364+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,4],"acting":[3,0,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.a","version":"0'0","reported_seq":33,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.211853+0000","last_change":"2026-03-10T14:51:52.748094+0000","last_active":"2026-03-10T14:52:24.211853+0000","last_peered":"2026-03-10T14:52:24.211853+0000","last_clean":"2026-03-10T14:52:24.211853+0000","last_became_active":"2026-03-10T14:51:52.748012+0000","last_became_peered":"2026-03-10T14:51:52.748012+0000"
,"last_unstale":"2026-03-10T14:52:24.211853+0000","last_undegraded":"2026-03-10T14:52:24.211853+0000","last_fullsized":"2026-03-10T14:52:24.211853+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_clean_scrub_stamp":"2026-03-10T14:51:51.696445+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T21:44:59.562570+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,3,7],"acting":[1,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"5.d"
,"version":"65'11","reported_seq":50,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:57.793379+0000","last_change":"2026-03-10T14:51:56.941162+0000","last_active":"2026-03-10T14:52:57.793379+0000","last_peered":"2026-03-10T14:52:57.793379+0000","last_clean":"2026-03-10T14:52:57.793379+0000","last_became_active":"2026-03-10T14:51:56.940757+0000","last_became_peered":"2026-03-10T14:51:56.940757+0000","last_unstale":"2026-03-10T14:52:57.793379+0000","last_undegraded":"2026-03-10T14:52:57.793379+0000","last_fullsized":"2026-03-10T14:52:57.793379+0000","mapping_epoch":61,"log_start":"0'0","ondisk_log_start":"0'0","created":61,"last_epoch_clean":62,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_clean_scrub_stamp":"2026-03-10T14:51:55.902610+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T17:53:46.217768+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,7,5],"acting":[2,7,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.e","version":"0'0","reported_seq":21,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.329820+0000","last_change":"2026-03-10T14:51:58.942231+0000","last_active":"2026-03-10T14:52:24.329820+0000","last_peered":"2026-03-10T14:52:24.329820+0000","last_clean":"2026-03-10T14:52:24.329820+0000","last_became_active":"2026-03-10T14:51:58.941991+0000","last_became_peered":"2026-03-10T14:51:58.941991+0000","last_unstale":"2026-03-10T14:52:24.329820+0000","last_undegraded":"2026-03-10T14:52:24.329820+0000","last_fullsized":"2026-03-10T14:52:24.329820+0000","mapping_epoch":63,"log_start":"0'0","ondisk_log_start":"0'0","created":63,"last_epoch_clean":64,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:57.9133
12+0000","last_clean_scrub_stamp":"2026-03-10T14:51:57.913312+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T22:56:44.140614+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,1,2],"acting":[4,1,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.8","version":"65'15","reported_seq":54,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.337580+0000","last_change":"2026-03-10T14:51:54.919825+0000","last_active":"2026-03-10T14:52:24.337580+0000","last_peered":"2026-03-10T14:52:24.337580+0000","last_clean":"2026-03-10T14:52:24.337580+0000","last_became_active":"2026-03-10T14:51:54.919644+0000","last_became_peered":"2026-03-10T14:51:54.919644+0000","las
t_unstale":"2026-03-10T14:52:24.337580+0000","last_undegraded":"2026-03-10T14:52:24.337580+0000","last_fullsized":"2026-03-10T14:52:24.337580+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_clean_scrub_stamp":"2026-03-10T14:51:53.708867+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T15:42:44.634847+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,7],"acting":[3,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2
.9","version":"0'0","reported_seq":33,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.211884+0000","last_change":"2026-03-10T14:51:52.743088+0000","last_active":"2026-03-10T14:52:24.211884+0000","last_peered":"2026-03-10T14:52:24.211884+0000","last_clean":"2026-03-10T14:52:24.211884+0000","last_became_active":"2026-03-10T14:51:52.742955+0000","last_became_peered":"2026-03-10T14:51:52.742955+0000","last_unstale":"2026-03-10T14:52:24.211884+0000","last_undegraded":"2026-03-10T14:52:24.211884+0000","last_fullsized":"2026-03-10T14:52:24.211884+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_clean_scrub_stamp":"2026-03-10T14:51:51.696445+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:37:32.602079+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,7,3],"acting":[1,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"5.e","version":"65'11","reported_seq":51,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:53:02.791188+0000","last_change":"2026-03-10T14:51:56.945138+0000","last_active":"2026-03-10T14:53:02.791188+0000","last_peered":"2026-03-10T14:53:02.791188+0000","last_clean":"2026-03-10T14:53:02.791188+0000","last_became_active":"2026-03-10T14:51:56.944465+0000","last_became_peered":"2026-03-10T14:51:56.944465+0000","last_unstale":"2026-03-10T14:53:02.791188+0000","last_undegraded":"2026-03-10T14:53:02.791188+0000","last_fullsized":"2026-03-10T14:53:02.791188+0000","mapping_epoch":61,"log_start":"0'0","ondisk_log_start":"0'0","created":61,"last_epoch_clean":62,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:55.90
2610+0000","last_clean_scrub_stamp":"2026-03-10T14:51:55.902610+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-12T02:15:38.509873+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,5,0],"acting":[4,5,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"6.d","version":"0'0","reported_seq":21,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.210186+0000","last_change":"2026-03-10T14:51:58.952345+0000","last_active":"2026-03-10T14:52:24.210186+0000","last_peered":"2026-03-10T14:52:24.210186+0000","last_clean":"2026-03-10T14:52:24.210186+0000","last_became_active":"2026-03-10T14:51:58.951477+0000","last_became_peered":"2026-03-10T14:51:58.951477+0000","l
ast_unstale":"2026-03-10T14:52:24.210186+0000","last_undegraded":"2026-03-10T14:52:24.210186+0000","last_fullsized":"2026-03-10T14:52:24.210186+0000","mapping_epoch":63,"log_start":"0'0","ondisk_log_start":"0'0","created":63,"last_epoch_clean":64,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_clean_scrub_stamp":"2026-03-10T14:51:57.913312+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T17:44:55.833148+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,0],"acting":[5,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"3.9","v
ersion":"65'12","reported_seq":52,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.328706+0000","last_change":"2026-03-10T14:51:54.927009+0000","last_active":"2026-03-10T14:52:24.328706+0000","last_peered":"2026-03-10T14:52:24.328706+0000","last_clean":"2026-03-10T14:52:24.328706+0000","last_became_active":"2026-03-10T14:51:54.926904+0000","last_became_peered":"2026-03-10T14:51:54.926904+0000","last_unstale":"2026-03-10T14:52:24.328706+0000","last_undegraded":"2026-03-10T14:52:24.328706+0000","last_fullsized":"2026-03-10T14:52:24.328706+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_clean_scrub_stamp":"2026-03-10T14:51:53.708867+0000","objects_scrubbed":0,"log_size":12,"log_dups_size":0,"ondisk_log_size":12,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T20:08:30.966606+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":220,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":25,"num_read_kb":16,"num_write":14,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,2,7],"acting":[4,2,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.8","version":"0'0","reported_seq":33,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.263231+0000","last_change":"2026-03-10T14:51:52.748200+0000","last_active":"2026-03-10T14:52:25.263231+0000","last_peered":"2026-03-10T14:52:25.263231+0000","last_clean":"2026-03-10T14:52:25.263231+0000","last_became_active":"2026-03-10T14:51:52.748100+0000","last_became_peered":"2026-03-10T14:51:52.748100+0000","last_unstale":"2026-03-10T14:52:25.263231+0000","last_undegraded":"2026-03-10T14:52:25.263231+0000","last_fullsized":"2026-03-10T14:52:25.263231+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:5
1.696445+0000","last_clean_scrub_stamp":"2026-03-10T14:51:51.696445+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T18:21:43.252153+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0,1],"acting":[7,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.f","version":"0'0","reported_seq":25,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.211550+0000","last_change":"2026-03-10T14:51:56.939022+0000","last_active":"2026-03-10T14:52:24.211550+0000","last_peered":"2026-03-10T14:52:24.211550+0000","last_clean":"2026-03-10T14:52:24.211550+0000","last_became_active":"2026-03-10T14:51:56.938004+0000","last_became_peered":"2026-03-10T14:51:56.938004+0000",
"last_unstale":"2026-03-10T14:52:24.211550+0000","last_undegraded":"2026-03-10T14:52:24.211550+0000","last_fullsized":"2026-03-10T14:52:24.211550+0000","mapping_epoch":61,"log_start":"0'0","ondisk_log_start":"0'0","created":61,"last_epoch_clean":62,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_clean_scrub_stamp":"2026-03-10T14:51:55.902610+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T15:53:13.390551+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,6],"acting":[5,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.c",
"version":"0'0","reported_seq":21,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.336970+0000","last_change":"2026-03-10T14:51:59.513685+0000","last_active":"2026-03-10T14:52:24.336970+0000","last_peered":"2026-03-10T14:52:24.336970+0000","last_clean":"2026-03-10T14:52:24.336970+0000","last_became_active":"2026-03-10T14:51:59.513553+0000","last_became_peered":"2026-03-10T14:51:59.513553+0000","last_unstale":"2026-03-10T14:52:24.336970+0000","last_undegraded":"2026-03-10T14:52:24.336970+0000","last_fullsized":"2026-03-10T14:52:24.336970+0000","mapping_epoch":63,"log_start":"0'0","ondisk_log_start":"0'0","created":63,"last_epoch_clean":64,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_clean_scrub_stamp":"2026-03-10T14:51:57.913312+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T17:08:16.694200+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,5],"acting":[3,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.6","version":"65'12","reported_seq":47,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.214518+0000","last_change":"2026-03-10T14:51:54.931786+0000","last_active":"2026-03-10T14:52:24.214518+0000","last_peered":"2026-03-10T14:52:24.214518+0000","last_clean":"2026-03-10T14:52:24.214518+0000","last_became_active":"2026-03-10T14:51:54.931647+0000","last_became_peered":"2026-03-10T14:51:54.931647+0000","last_unstale":"2026-03-10T14:52:24.214518+0000","last_undegraded":"2026-03-10T14:52:24.214518+0000","last_fullsized":"2026-03-10T14:52:24.214518+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:53.70
8867+0000","last_clean_scrub_stamp":"2026-03-10T14:51:53.708867+0000","objects_scrubbed":0,"log_size":12,"log_dups_size":0,"ondisk_log_size":12,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T18:07:39.349991+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":6,"num_object_clones":0,"num_object_copies":18,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":6,"num_whiteouts":0,"num_read":18,"num_read_kb":12,"num_write":12,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,4],"acting":[0,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"2.7","version":"0'0","reported_seq":33,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.243323+0000","last_change":"2026-03-10T14:51:52.741992+0000","last_active":"2026-03-10T14:52:25.243323+0000","last_peered":"2026-03-10T14:52:25.243323+0000","last_clean":"2026-03-10T14:52:25.243323+0000","last_became_active":"2026-03-10T14:51:52.741785+0000","last_became_peered":"2026-03-10T14:51:52.741785+0000
","last_unstale":"2026-03-10T14:52:25.243323+0000","last_undegraded":"2026-03-10T14:52:25.243323+0000","last_fullsized":"2026-03-10T14:52:25.243323+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_clean_scrub_stamp":"2026-03-10T14:51:51.696445+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-12T00:19:06.234347+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,7,2],"acting":[6,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"4.1
","version":"65'1","reported_seq":35,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.329766+0000","last_change":"2026-03-10T14:52:01.993419+0000","last_active":"2026-03-10T14:52:24.329766+0000","last_peered":"2026-03-10T14:52:24.329766+0000","last_clean":"2026-03-10T14:52:24.329766+0000","last_became_active":"2026-03-10T14:51:55.924128+0000","last_became_peered":"2026-03-10T14:51:55.924128+0000","last_unstale":"2026-03-10T14:52:24.329766+0000","last_undegraded":"2026-03-10T14:52:24.329766+0000","last_fullsized":"2026-03-10T14:52:24.329766+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:54.898851+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:54.898851+0000","last_clean_scrub_stamp":"2026-03-10T14:51:54.898851+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T17:12:36.513075+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00025000700000000001,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,5,6],"acting":[4,5,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.0","version":"65'11","reported_seq":51,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:53:02.793596+0000","last_change":"2026-03-10T14:51:56.939092+0000","last_active":"2026-03-10T14:53:02.793596+0000","last_peered":"2026-03-10T14:53:02.793596+0000","last_clean":"2026-03-10T14:53:02.793596+0000","last_became_active":"2026-03-10T14:51:56.938931+0000","last_became_peered":"2026-03-10T14:51:56.938931+0000","last_unstale":"2026-03-10T14:53:02.793596+0000","last_undegraded":"2026-03-10T14:53:02.793596+0000","last_fullsized":"2026-03-10T14:53:02.793596+0000","mapping_epoch":61,"log_start":"0'0","ondisk_log_start":"0'0","created":61,"last_epoch_clean":62,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2
026-03-10T14:51:55.902610+0000","last_clean_scrub_stamp":"2026-03-10T14:51:55.902610+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T17:30:37.620150+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,4],"acting":[3,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.3","version":"0'0","reported_seq":21,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.263735+0000","last_change":"2026-03-10T14:51:59.508728+0000","last_active":"2026-03-10T14:52:25.263735+0000","last_peered":"2026-03-10T14:52:25.263735+0000","last_clean":"2026-03-10T14:52:25.263735+0000","last_became_active":"2026-03-10T14:51:59.508529+0000","last_became_peered":"2026-03-10T14:
51:59.508529+0000","last_unstale":"2026-03-10T14:52:25.263735+0000","last_undegraded":"2026-03-10T14:52:25.263735+0000","last_fullsized":"2026-03-10T14:52:25.263735+0000","mapping_epoch":63,"log_start":"0'0","ondisk_log_start":"0'0","created":63,"last_epoch_clean":64,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_clean_scrub_stamp":"2026-03-10T14:51:57.913312+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T17:12:36.680680+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,2],"acting":[7,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps"
:[]},{"pgid":"3.7","version":"65'13","reported_seq":56,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.337873+0000","last_change":"2026-03-10T14:51:54.918043+0000","last_active":"2026-03-10T14:52:24.337873+0000","last_peered":"2026-03-10T14:52:24.337873+0000","last_clean":"2026-03-10T14:52:24.337873+0000","last_became_active":"2026-03-10T14:51:54.917933+0000","last_became_peered":"2026-03-10T14:51:54.917933+0000","last_unstale":"2026-03-10T14:52:24.337873+0000","last_undegraded":"2026-03-10T14:52:24.337873+0000","last_fullsized":"2026-03-10T14:52:24.337873+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_clean_scrub_stamp":"2026-03-10T14:51:53.708867+0000","objects_scrubbed":0,"log_size":13,"log_dups_size":0,"ondisk_log_size":13,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T21:05:08.995873+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":330,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":30,"num_read_kb":19,"num_write":16,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,0],"acting":[3,7,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.6","version":"58'1","reported_seq":34,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.211927+0000","last_change":"2026-03-10T14:51:52.733363+0000","last_active":"2026-03-10T14:52:24.211927+0000","last_peered":"2026-03-10T14:52:24.211927+0000","last_clean":"2026-03-10T14:52:24.211927+0000","last_became_active":"2026-03-10T14:51:52.733222+0000","last_became_peered":"2026-03-10T14:51:52.733222+0000","last_unstale":"2026-03-10T14:52:24.211927+0000","last_undegraded":"2026-03-10T14:52:24.211927+0000","last_fullsized":"2026-03-10T14:52:24.211927+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:
51.696445+0000","last_clean_scrub_stamp":"2026-03-10T14:51:51.696445+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T18:59:28.187606+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":46,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":1,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,6,4],"acting":[1,6,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"4.0","version":"67'5","reported_seq":104,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:53:08.565113+0000","last_change":"2026-03-10T14:52:02.062549+0000","last_active":"2026-03-10T14:53:08.565113+0000","last_peered":"2026-03-10T14:53:08.565113+0000","last_clean":"2026-03-10T14:53:08.565113+0000","last_became_active":"2026-03-10T14:51:55.924328+0000","last_became_peered":"2026-03-10T14:51:55.924328+00
00","last_unstale":"2026-03-10T14:53:08.565113+0000","last_undegraded":"2026-03-10T14:53:08.565113+0000","last_fullsized":"2026-03-10T14:53:08.565113+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:54.898851+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:54.898851+0000","last_clean_scrub_stamp":"2026-03-10T14:51:54.898851+0000","objects_scrubbed":0,"log_size":5,"log_dups_size":0,"ondisk_log_size":5,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T16:54:12.007760+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.000314672,"stat_sum":{"num_bytes":389,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":67,"num_read_kb":62,"num_write":4,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,0],"acting":[3,7,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":
[]},{"pgid":"5.1","version":"0'0","reported_seq":25,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.329365+0000","last_change":"2026-03-10T14:51:56.943429+0000","last_active":"2026-03-10T14:52:24.329365+0000","last_peered":"2026-03-10T14:52:24.329365+0000","last_clean":"2026-03-10T14:52:24.329365+0000","last_became_active":"2026-03-10T14:51:56.943195+0000","last_became_peered":"2026-03-10T14:51:56.943195+0000","last_unstale":"2026-03-10T14:52:24.329365+0000","last_undegraded":"2026-03-10T14:52:24.329365+0000","last_fullsized":"2026-03-10T14:52:24.329365+0000","mapping_epoch":61,"log_start":"0'0","ondisk_log_start":"0'0","created":61,"last_epoch_clean":62,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_clean_scrub_stamp":"2026-03-10T14:51:55.902610+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:56:39.901791+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,7],"acting":[4,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"6.2","version":"0'0","reported_seq":21,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.329327+0000","last_change":"2026-03-10T14:51:58.942318+0000","last_active":"2026-03-10T14:52:24.329327+0000","last_peered":"2026-03-10T14:52:24.329327+0000","last_clean":"2026-03-10T14:52:24.329327+0000","last_became_active":"2026-03-10T14:51:58.942153+0000","last_became_peered":"2026-03-10T14:51:58.942153+0000","last_unstale":"2026-03-10T14:52:24.329327+0000","last_undegraded":"2026-03-10T14:52:24.329327+0000","last_fullsized":"2026-03-10T14:52:24.329327+0000","mapping_epoch":63,"log_start":"0'0","ondisk_log_start":"0'0","created":63,"last_epoch_clean":64,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:57.9133
12+0000","last_clean_scrub_stamp":"2026-03-10T14:51:57.913312+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-12T00:08:33.354217+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,2],"acting":[4,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.4","version":"65'30","reported_seq":94,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:57.793834+0000","last_change":"2026-03-10T14:51:54.934244+0000","last_active":"2026-03-10T14:52:57.793834+0000","last_peered":"2026-03-10T14:52:57.793834+0000","last_clean":"2026-03-10T14:52:57.793834+0000","last_became_active":"2026-03-10T14:51:54.934074+0000","last_became_peered":"2026-03-10T14:51:54.934074+0000","las
t_unstale":"2026-03-10T14:52:57.793834+0000","last_undegraded":"2026-03-10T14:52:57.793834+0000","last_fullsized":"2026-03-10T14:52:57.793834+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_clean_scrub_stamp":"2026-03-10T14:51:53.708867+0000","objects_scrubbed":0,"log_size":30,"log_dups_size":0,"ondisk_log_size":30,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-12T02:08:46.795398+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":358,"num_objects":10,"num_object_clones":0,"num_object_copies":30,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":10,"num_whiteouts":0,"num_read":51,"num_read_kb":36,"num_write":26,"num_write_kb":4,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2,5],"acting":[1,2,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":
"2.5","version":"0'0","reported_seq":33,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.263265+0000","last_change":"2026-03-10T14:51:52.747373+0000","last_active":"2026-03-10T14:52:25.263265+0000","last_peered":"2026-03-10T14:52:25.263265+0000","last_clean":"2026-03-10T14:52:25.263265+0000","last_became_active":"2026-03-10T14:51:52.747060+0000","last_became_peered":"2026-03-10T14:51:52.747060+0000","last_unstale":"2026-03-10T14:52:25.263265+0000","last_undegraded":"2026-03-10T14:52:25.263265+0000","last_fullsized":"2026-03-10T14:52:25.263265+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_clean_scrub_stamp":"2026-03-10T14:51:51.696445+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T22:15:26.083583+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0,4],"acting":[7,0,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.2","version":"0'0","reported_seq":25,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.243562+0000","last_change":"2026-03-10T14:51:56.952131+0000","last_active":"2026-03-10T14:52:25.243562+0000","last_peered":"2026-03-10T14:52:25.243562+0000","last_clean":"2026-03-10T14:52:25.243562+0000","last_became_active":"2026-03-10T14:51:56.952035+0000","last_became_peered":"2026-03-10T14:51:56.952035+0000","last_unstale":"2026-03-10T14:52:25.243562+0000","last_undegraded":"2026-03-10T14:52:25.243562+0000","last_fullsized":"2026-03-10T14:52:25.243562+0000","mapping_epoch":61,"log_start":"0'0","ondisk_log_start":"0'0","created":61,"last_epoch_clean":62,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:55.9026
10+0000","last_clean_scrub_stamp":"2026-03-10T14:51:55.902610+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-12T02:07:25.644632+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,0,5],"acting":[6,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"6.1","version":"0'0","reported_seq":21,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.212310+0000","last_change":"2026-03-10T14:51:59.466645+0000","last_active":"2026-03-10T14:52:24.212310+0000","last_peered":"2026-03-10T14:52:24.212310+0000","last_clean":"2026-03-10T14:52:24.212310+0000","last_became_active":"2026-03-10T14:51:59.465974+0000","last_became_peered":"2026-03-10T14:51:59.465974+0000","last_
unstale":"2026-03-10T14:52:24.212310+0000","last_undegraded":"2026-03-10T14:52:24.212310+0000","last_fullsized":"2026-03-10T14:52:24.212310+0000","mapping_epoch":63,"log_start":"0'0","ondisk_log_start":"0'0","created":63,"last_epoch_clean":64,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_clean_scrub_stamp":"2026-03-10T14:51:57.913312+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T18:28:00.892473+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,6,2],"acting":[1,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.5","versi
on":"65'16","reported_seq":66,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:57.791403+0000","last_change":"2026-03-10T14:51:54.930086+0000","last_active":"2026-03-10T14:52:57.791403+0000","last_peered":"2026-03-10T14:52:57.791403+0000","last_clean":"2026-03-10T14:52:57.791403+0000","last_became_active":"2026-03-10T14:51:54.925263+0000","last_became_peered":"2026-03-10T14:51:54.925263+0000","last_unstale":"2026-03-10T14:52:57.791403+0000","last_undegraded":"2026-03-10T14:52:57.791403+0000","last_fullsized":"2026-03-10T14:52:57.791403+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_clean_scrub_stamp":"2026-03-10T14:51:53.708867+0000","objects_scrubbed":0,"log_size":16,"log_dups_size":0,"ondisk_log_size":16,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:23:48.577488+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":154,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":25,"num_read_kb":15,"num_write":13,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,2],"acting":[5,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"2.4","version":"0'0","reported_seq":33,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.211960+0000","last_change":"2026-03-10T14:51:52.729263+0000","last_active":"2026-03-10T14:52:24.211960+0000","last_peered":"2026-03-10T14:52:24.211960+0000","last_clean":"2026-03-10T14:52:24.211960+0000","last_became_active":"2026-03-10T14:51:52.729190+0000","last_became_peered":"2026-03-10T14:51:52.729190+0000","last_unstale":"2026-03-10T14:52:24.211960+0000","last_undegraded":"2026-03-10T14:52:24.211960+0000","last_fullsized":"2026-03-10T14:52:24.211960+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:5
1.696445+0000","last_clean_scrub_stamp":"2026-03-10T14:51:51.696445+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T23:36:34.378261+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,0,7],"acting":[1,0,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"4.2","version":"67'2","reported_seq":36,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.211985+0000","last_change":"2026-03-10T14:52:02.065640+0000","last_active":"2026-03-10T14:52:24.211985+0000","last_peered":"2026-03-10T14:52:24.211985+0000","last_clean":"2026-03-10T14:52:24.211985+0000","last_became_active":"2026-03-10T14:51:55.930421+0000","last_became_peered":"2026-03-10T14:51:55.930421+0000"
,"last_unstale":"2026-03-10T14:52:24.211985+0000","last_undegraded":"2026-03-10T14:52:24.211985+0000","last_fullsized":"2026-03-10T14:52:24.211985+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:54.898851+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:54.898851+0000","last_clean_scrub_stamp":"2026-03-10T14:51:54.898851+0000","objects_scrubbed":0,"log_size":2,"log_dups_size":0,"ondisk_log_size":2,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T17:36:25.736444+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.0010901909999999999,"stat_sum":{"num_bytes":19,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,4],"acting":[1,5,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_sna
ps":[]},{"pgid":"5.3","version":"65'11","reported_seq":51,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:53:02.793746+0000","last_change":"2026-03-10T14:51:56.937408+0000","last_active":"2026-03-10T14:53:02.793746+0000","last_peered":"2026-03-10T14:53:02.793746+0000","last_clean":"2026-03-10T14:53:02.793746+0000","last_became_active":"2026-03-10T14:51:56.937324+0000","last_became_peered":"2026-03-10T14:51:56.937324+0000","last_unstale":"2026-03-10T14:53:02.793746+0000","last_undegraded":"2026-03-10T14:53:02.793746+0000","last_fullsized":"2026-03-10T14:53:02.793746+0000","mapping_epoch":61,"log_start":"0'0","ondisk_log_start":"0'0","created":61,"last_epoch_clean":62,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_clean_scrub_stamp":"2026-03-10T14:51:55.902610+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T22:16:18.375445+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,6,5],"acting":[0,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"6.0","version":"0'0","reported_seq":21,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.214023+0000","last_change":"2026-03-10T14:51:58.942257+0000","last_active":"2026-03-10T14:52:24.214023+0000","last_peered":"2026-03-10T14:52:24.214023+0000","last_clean":"2026-03-10T14:52:24.214023+0000","last_became_active":"2026-03-10T14:51:58.942107+0000","last_became_peered":"2026-03-10T14:51:58.942107+0000","last_unstale":"2026-03-10T14:52:24.214023+0000","last_undegraded":"2026-03-10T14:52:24.214023+0000","last_fullsized":"2026-03-10T14:52:24.214023+0000","mapping_epoch":63,"log_start":"0'0","ondisk_log_start":"0'0","created":63,"last_epoch_clean":64,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:57.9133
12+0000","last_clean_scrub_stamp":"2026-03-10T14:51:57.913312+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T17:31:10.008438+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,3,2],"acting":[0,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.3","version":"65'19","reported_seq":65,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.329159+0000","last_change":"2026-03-10T14:51:54.926713+0000","last_active":"2026-03-10T14:52:24.329159+0000","last_peered":"2026-03-10T14:52:24.329159+0000","last_clean":"2026-03-10T14:52:24.329159+0000","last_became_active":"2026-03-10T14:51:54.926410+0000","last_became_peered":"2026-03-10T14:51:54.926410+0000","las
t_unstale":"2026-03-10T14:52:24.329159+0000","last_undegraded":"2026-03-10T14:52:24.329159+0000","last_fullsized":"2026-03-10T14:52:24.329159+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_clean_scrub_stamp":"2026-03-10T14:51:53.708867+0000","objects_scrubbed":0,"log_size":19,"log_dups_size":0,"ondisk_log_size":19,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T15:44:59.827455+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":330,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":39,"num_read_kb":25,"num_write":22,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,0,6],"acting":[4,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2
.2","version":"0'0","reported_seq":33,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.211250+0000","last_change":"2026-03-10T14:51:52.729818+0000","last_active":"2026-03-10T14:52:24.211250+0000","last_peered":"2026-03-10T14:52:24.211250+0000","last_clean":"2026-03-10T14:52:24.211250+0000","last_became_active":"2026-03-10T14:51:52.728865+0000","last_became_peered":"2026-03-10T14:51:52.728865+0000","last_unstale":"2026-03-10T14:52:24.211250+0000","last_undegraded":"2026-03-10T14:52:24.211250+0000","last_fullsized":"2026-03-10T14:52:24.211250+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_clean_scrub_stamp":"2026-03-10T14:51:51.696445+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T17:53:19.462601+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,6],"acting":[5,1,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.5","version":"0'0","reported_seq":25,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.213819+0000","last_change":"2026-03-10T14:51:56.937065+0000","last_active":"2026-03-10T14:52:24.213819+0000","last_peered":"2026-03-10T14:52:24.213819+0000","last_clean":"2026-03-10T14:52:24.213819+0000","last_became_active":"2026-03-10T14:51:56.936957+0000","last_became_peered":"2026-03-10T14:51:56.936957+0000","last_unstale":"2026-03-10T14:52:24.213819+0000","last_undegraded":"2026-03-10T14:52:24.213819+0000","last_fullsized":"2026-03-10T14:52:24.213819+0000","mapping_epoch":61,"log_start":"0'0","ondisk_log_start":"0'0","created":61,"last_epoch_clean":62,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:55.9026
10+0000","last_clean_scrub_stamp":"2026-03-10T14:51:55.902610+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T15:31:37.739743+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,4],"acting":[0,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"6.6","version":"65'1","reported_seq":22,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.337054+0000","last_change":"2026-03-10T14:51:58.946891+0000","last_active":"2026-03-10T14:52:24.337054+0000","last_peered":"2026-03-10T14:52:24.337054+0000","last_clean":"2026-03-10T14:52:24.337054+0000","last_became_active":"2026-03-10T14:51:58.946674+0000","last_became_peered":"2026-03-10T14:51:58.946674+0000","last
_unstale":"2026-03-10T14:52:24.337054+0000","last_undegraded":"2026-03-10T14:52:24.337054+0000","last_fullsized":"2026-03-10T14:52:24.337054+0000","mapping_epoch":63,"log_start":"0'0","ondisk_log_start":"0'0","created":63,"last_epoch_clean":64,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_clean_scrub_stamp":"2026-03-10T14:51:57.913312+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T22:58:56.849208+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":13,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":1,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,4,7],"acting":[3,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.0","ver
sion":"65'18","reported_seq":61,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.212245+0000","last_change":"2026-03-10T14:51:54.924261+0000","last_active":"2026-03-10T14:52:24.212245+0000","last_peered":"2026-03-10T14:52:24.212245+0000","last_clean":"2026-03-10T14:52:24.212245+0000","last_became_active":"2026-03-10T14:51:54.924158+0000","last_became_peered":"2026-03-10T14:51:54.924158+0000","last_unstale":"2026-03-10T14:52:24.212245+0000","last_undegraded":"2026-03-10T14:52:24.212245+0000","last_fullsized":"2026-03-10T14:52:24.212245+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_clean_scrub_stamp":"2026-03-10T14:51:53.708867+0000","objects_scrubbed":0,"log_size":18,"log_dups_size":0,"ondisk_log_size":18,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-12T01:14:52.917121+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":220,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":34,"num_read_kb":22,"num_write":20,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2,6],"acting":[1,2,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"2.1","version":"0'0","reported_seq":33,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.193999+0000","last_change":"2026-03-10T14:51:52.734331+0000","last_active":"2026-03-10T14:52:25.193999+0000","last_peered":"2026-03-10T14:52:25.193999+0000","last_clean":"2026-03-10T14:52:25.193999+0000","last_became_active":"2026-03-10T14:51:52.733969+0000","last_became_peered":"2026-03-10T14:51:52.733969+0000","last_unstale":"2026-03-10T14:52:25.193999+0000","last_undegraded":"2026-03-10T14:52:25.193999+0000","last_fullsized":"2026-03-10T14:52:25.193999+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:5
1.696445+0000","last_clean_scrub_stamp":"2026-03-10T14:51:51.696445+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T22:53:58.982626+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,3,0],"acting":[2,3,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"5.6","version":"0'0","reported_seq":25,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.194216+0000","last_change":"2026-03-10T14:51:56.941101+0000","last_active":"2026-03-10T14:52:25.194216+0000","last_peered":"2026-03-10T14:52:25.194216+0000","last_clean":"2026-03-10T14:52:25.194216+0000","last_became_active":"2026-03-10T14:51:56.940938+0000","last_became_peered":"2026-03-10T14:51:56.940938+0000",
"last_unstale":"2026-03-10T14:52:25.194216+0000","last_undegraded":"2026-03-10T14:52:25.194216+0000","last_fullsized":"2026-03-10T14:52:25.194216+0000","mapping_epoch":61,"log_start":"0'0","ondisk_log_start":"0'0","created":61,"last_epoch_clean":62,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_clean_scrub_stamp":"2026-03-10T14:51:55.902610+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T17:05:35.985365+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,5,7],"acting":[2,5,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.5",
"version":"0'0","reported_seq":21,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.264981+0000","last_change":"2026-03-10T14:51:59.508442+0000","last_active":"2026-03-10T14:52:25.264981+0000","last_peered":"2026-03-10T14:52:25.264981+0000","last_clean":"2026-03-10T14:52:25.264981+0000","last_became_active":"2026-03-10T14:51:59.508307+0000","last_became_peered":"2026-03-10T14:51:59.508307+0000","last_unstale":"2026-03-10T14:52:25.264981+0000","last_undegraded":"2026-03-10T14:52:25.264981+0000","last_fullsized":"2026-03-10T14:52:25.264981+0000","mapping_epoch":63,"log_start":"0'0","ondisk_log_start":"0'0","created":63,"last_epoch_clean":64,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_clean_scrub_stamp":"2026-03-10T14:51:57.913312+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:30:19.067728+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,3],"acting":[7,6,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.1","version":"65'14","reported_seq":50,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.214510+0000","last_change":"2026-03-10T14:51:54.931494+0000","last_active":"2026-03-10T14:52:24.214510+0000","last_peered":"2026-03-10T14:52:24.214510+0000","last_clean":"2026-03-10T14:52:24.214510+0000","last_became_active":"2026-03-10T14:51:54.931402+0000","last_became_peered":"2026-03-10T14:51:54.931402+0000","last_unstale":"2026-03-10T14:52:24.214510+0000","last_undegraded":"2026-03-10T14:52:24.214510+0000","last_fullsized":"2026-03-10T14:52:24.214510+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:53.70
8867+0000","last_clean_scrub_stamp":"2026-03-10T14:51:53.708867+0000","objects_scrubbed":0,"log_size":14,"log_dups_size":0,"ondisk_log_size":14,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T19:58:36.522315+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":21,"num_read_kb":14,"num_write":14,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,4,3],"acting":[0,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"2.0","version":"0'0","reported_seq":33,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.263142+0000","last_change":"2026-03-10T14:51:52.751250+0000","last_active":"2026-03-10T14:52:25.263142+0000","last_peered":"2026-03-10T14:52:25.263142+0000","last_clean":"2026-03-10T14:52:25.263142+0000","last_became_active":"2026-03-10T14:51:52.750061+0000","last_became_peered":"2026-03-10T14:51:52.750061+0000
","last_unstale":"2026-03-10T14:52:25.263142+0000","last_undegraded":"2026-03-10T14:52:25.263142+0000","last_fullsized":"2026-03-10T14:52:25.263142+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_clean_scrub_stamp":"2026-03-10T14:51:51.696445+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-12T02:29:24.434390+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,1,0],"acting":[7,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.7
","version":"0'0","reported_seq":25,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.211401+0000","last_change":"2026-03-10T14:51:56.941452+0000","last_active":"2026-03-10T14:52:24.211401+0000","last_peered":"2026-03-10T14:52:24.211401+0000","last_clean":"2026-03-10T14:52:24.211401+0000","last_became_active":"2026-03-10T14:51:56.939000+0000","last_became_peered":"2026-03-10T14:51:56.939000+0000","last_unstale":"2026-03-10T14:52:24.211401+0000","last_undegraded":"2026-03-10T14:52:24.211401+0000","last_fullsized":"2026-03-10T14:52:24.211401+0000","mapping_epoch":61,"log_start":"0'0","ondisk_log_start":"0'0","created":61,"last_epoch_clean":62,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_clean_scrub_stamp":"2026-03-10T14:51:55.902610+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:06:07.919881+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,0],"acting":[5,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.4","version":"0'0","reported_seq":21,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.212043+0000","last_change":"2026-03-10T14:51:58.952090+0000","last_active":"2026-03-10T14:52:24.212043+0000","last_peered":"2026-03-10T14:52:24.212043+0000","last_clean":"2026-03-10T14:52:24.212043+0000","last_became_active":"2026-03-10T14:51:58.951968+0000","last_became_peered":"2026-03-10T14:51:58.951968+0000","last_unstale":"2026-03-10T14:52:24.212043+0000","last_undegraded":"2026-03-10T14:52:24.212043+0000","last_fullsized":"2026-03-10T14:52:24.212043+0000","mapping_epoch":63,"log_start":"0'0","ondisk_log_start":"0'0","created":63,"last_epoch_clean":64,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:57.9133
12+0000","last_clean_scrub_stamp":"2026-03-10T14:51:57.913312+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T18:33:36.358829+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,3],"acting":[1,5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.2","version":"65'10","reported_seq":44,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.337801+0000","last_change":"2026-03-10T14:51:54.936148+0000","last_active":"2026-03-10T14:52:24.337801+0000","last_peered":"2026-03-10T14:52:24.337801+0000","last_clean":"2026-03-10T14:52:24.337801+0000","last_became_active":"2026-03-10T14:51:54.935998+0000","last_became_peered":"2026-03-10T14:51:54.935998+0000","las
t_unstale":"2026-03-10T14:52:24.337801+0000","last_undegraded":"2026-03-10T14:52:24.337801+0000","last_fullsized":"2026-03-10T14:52:24.337801+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_clean_scrub_stamp":"2026-03-10T14:51:53.708867+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T21:52:40.667935+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,5,6],"acting":[3,5,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.3
","version":"0'0","reported_seq":33,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.210651+0000","last_change":"2026-03-10T14:51:52.743019+0000","last_active":"2026-03-10T14:52:24.210651+0000","last_peered":"2026-03-10T14:52:24.210651+0000","last_clean":"2026-03-10T14:52:24.210651+0000","last_became_active":"2026-03-10T14:51:52.742637+0000","last_became_peered":"2026-03-10T14:51:52.742637+0000","last_unstale":"2026-03-10T14:52:24.210651+0000","last_undegraded":"2026-03-10T14:52:24.210651+0000","last_fullsized":"2026-03-10T14:52:24.210651+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_clean_scrub_stamp":"2026-03-10T14:51:51.696445+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T23:32:49.414867+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,2,7],"acting":[5,2,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"1.0","version":"69'39","reported_seq":68,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:26.497710+0000","last_change":"2026-03-10T14:51:33.247484+0000","last_active":"2026-03-10T14:52:26.497710+0000","last_peered":"2026-03-10T14:52:26.497710+0000","last_clean":"2026-03-10T14:52:26.497710+0000","last_became_active":"2026-03-10T14:51:33.241993+0000","last_became_peered":"2026-03-10T14:51:33.241993+0000","last_unstale":"2026-03-10T14:52:26.497710+0000","last_undegraded":"2026-03-10T14:52:26.497710+0000","last_fullsized":"2026-03-10T14:52:26.497710+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":20,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:48:39.387534+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:48:39.38
7534+0000","last_clean_scrub_stamp":"2026-03-10T14:48:39.387534+0000","objects_scrubbed":0,"log_size":39,"log_dups_size":0,"ondisk_log_size":39,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T15:31:24.045522+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":106,"num_read_kb":213,"num_write":69,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0,6],"acting":[7,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.4","version":"0'0","reported_seq":25,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.264286+0000","last_change":"2026-03-10T14:51:56.951571+0000","last_active":"2026-03-10T14:52:25.264286+0000","last_peered":"2026-03-10T14:52:25.264286+0000","last_clean":"2026-03-10T14:52:25.264286+0000","last_became_active":"2026-03-10T14:51:56.951407+0000","last_became_peered":"2026-03-10T14:51:5
6.951407+0000","last_unstale":"2026-03-10T14:52:25.264286+0000","last_undegraded":"2026-03-10T14:52:25.264286+0000","last_fullsized":"2026-03-10T14:52:25.264286+0000","mapping_epoch":61,"log_start":"0'0","ondisk_log_start":"0'0","created":61,"last_epoch_clean":62,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_clean_scrub_stamp":"2026-03-10T14:51:55.902610+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-12T00:58:57.435129+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,2,5],"acting":[7,2,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]}
,{"pgid":"6.7","version":"0'0","reported_seq":21,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.210597+0000","last_change":"2026-03-10T14:51:58.952595+0000","last_active":"2026-03-10T14:52:24.210597+0000","last_peered":"2026-03-10T14:52:24.210597+0000","last_clean":"2026-03-10T14:52:24.210597+0000","last_became_active":"2026-03-10T14:51:58.952392+0000","last_became_peered":"2026-03-10T14:51:58.952392+0000","last_unstale":"2026-03-10T14:52:24.210597+0000","last_undegraded":"2026-03-10T14:52:24.210597+0000","last_fullsized":"2026-03-10T14:52:24.210597+0000","mapping_epoch":63,"log_start":"0'0","ondisk_log_start":"0'0","created":63,"last_epoch_clean":64,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_clean_scrub_stamp":"2026-03-10T14:51:57.913312+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:11:06.409397+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,4],"acting":[5,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"3.d","version":"65'17","reported_seq":57,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.263587+0000","last_change":"2026-03-10T14:51:54.931004+0000","last_active":"2026-03-10T14:52:25.263587+0000","last_peered":"2026-03-10T14:52:25.263587+0000","last_clean":"2026-03-10T14:52:25.263587+0000","last_became_active":"2026-03-10T14:51:54.930756+0000","last_became_peered":"2026-03-10T14:51:54.930756+0000","last_unstale":"2026-03-10T14:52:25.263587+0000","last_undegraded":"2026-03-10T14:52:25.263587+0000","last_fullsized":"2026-03-10T14:52:25.263587+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:53.70
8867+0000","last_clean_scrub_stamp":"2026-03-10T14:51:53.708867+0000","objects_scrubbed":0,"log_size":17,"log_dups_size":0,"ondisk_log_size":17,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T18:47:37.656055+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":29,"num_read_kb":19,"num_write":18,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,5,6],"acting":[7,5,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.c","version":"0'0","reported_seq":33,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.193880+0000","last_change":"2026-03-10T14:51:52.732626+0000","last_active":"2026-03-10T14:52:25.193880+0000","last_peered":"2026-03-10T14:52:25.193880+0000","last_clean":"2026-03-10T14:52:25.193880+0000","last_became_active":"2026-03-10T14:51:52.732457+0000","last_became_peered":"2026-03-10T14:51:52.732457+00
00","last_unstale":"2026-03-10T14:52:25.193880+0000","last_undegraded":"2026-03-10T14:52:25.193880+0000","last_fullsized":"2026-03-10T14:52:25.193880+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_clean_scrub_stamp":"2026-03-10T14:51:51.696445+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-12T02:24:10.879181+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,5,0],"acting":[2,5,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"5
.b","version":"0'0","reported_seq":25,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.194365+0000","last_change":"2026-03-10T14:51:56.922668+0000","last_active":"2026-03-10T14:52:25.194365+0000","last_peered":"2026-03-10T14:52:25.194365+0000","last_clean":"2026-03-10T14:52:25.194365+0000","last_became_active":"2026-03-10T14:51:56.922427+0000","last_became_peered":"2026-03-10T14:51:56.922427+0000","last_unstale":"2026-03-10T14:52:25.194365+0000","last_undegraded":"2026-03-10T14:52:25.194365+0000","last_fullsized":"2026-03-10T14:52:25.194365+0000","mapping_epoch":61,"log_start":"0'0","ondisk_log_start":"0'0","created":61,"last_epoch_clean":62,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_clean_scrub_stamp":"2026-03-10T14:51:55.902610+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T23:50:08.480376+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,0,5],"acting":[2,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.8","version":"0'0","reported_seq":21,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.263628+0000","last_change":"2026-03-10T14:51:58.942958+0000","last_active":"2026-03-10T14:52:25.263628+0000","last_peered":"2026-03-10T14:52:25.263628+0000","last_clean":"2026-03-10T14:52:25.263628+0000","last_became_active":"2026-03-10T14:51:58.942873+0000","last_became_peered":"2026-03-10T14:51:58.942873+0000","last_unstale":"2026-03-10T14:52:25.263628+0000","last_undegraded":"2026-03-10T14:52:25.263628+0000","last_fullsized":"2026-03-10T14:52:25.263628+0000","mapping_epoch":63,"log_start":"0'0","ondisk_log_start":"0'0","created":63,"last_epoch_clean":64,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:57.9133
12+0000","last_clean_scrub_stamp":"2026-03-10T14:51:57.913312+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T21:43:34.921933+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,2,3],"acting":[7,2,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.c","version":"65'10","reported_seq":44,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.211277+0000","last_change":"2026-03-10T14:51:54.930155+0000","last_active":"2026-03-10T14:52:24.211277+0000","last_peered":"2026-03-10T14:52:24.211277+0000","last_clean":"2026-03-10T14:52:24.211277+0000","last_became_active":"2026-03-10T14:51:54.925390+0000","last_became_peered":"2026-03-10T14:51:54.925390+0000","las
t_unstale":"2026-03-10T14:52:24.211277+0000","last_undegraded":"2026-03-10T14:52:24.211277+0000","last_fullsized":"2026-03-10T14:52:24.211277+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_clean_scrub_stamp":"2026-03-10T14:51:53.708867+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T18:31:57.174372+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,6],"acting":[5,3,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"2.d
","version":"0'0","reported_seq":33,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.212200+0000","last_change":"2026-03-10T14:51:52.727337+0000","last_active":"2026-03-10T14:52:24.212200+0000","last_peered":"2026-03-10T14:52:24.212200+0000","last_clean":"2026-03-10T14:52:24.212200+0000","last_became_active":"2026-03-10T14:51:52.726653+0000","last_became_peered":"2026-03-10T14:51:52.726653+0000","last_unstale":"2026-03-10T14:52:24.212200+0000","last_undegraded":"2026-03-10T14:52:24.212200+0000","last_fullsized":"2026-03-10T14:52:24.212200+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_clean_scrub_stamp":"2026-03-10T14:51:51.696445+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T22:18:28.992971+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,4,3],"acting":[1,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"5.a","version":"0'0","reported_seq":25,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.194616+0000","last_change":"2026-03-10T14:51:56.935038+0000","last_active":"2026-03-10T14:52:25.194616+0000","last_peered":"2026-03-10T14:52:25.194616+0000","last_clean":"2026-03-10T14:52:25.194616+0000","last_became_active":"2026-03-10T14:51:56.934934+0000","last_became_peered":"2026-03-10T14:51:56.934934+0000","last_unstale":"2026-03-10T14:52:25.194616+0000","last_undegraded":"2026-03-10T14:52:25.194616+0000","last_fullsized":"2026-03-10T14:52:25.194616+0000","mapping_epoch":61,"log_start":"0'0","ondisk_log_start":"0'0","created":61,"last_epoch_clean":62,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:55.9026
10+0000","last_clean_scrub_stamp":"2026-03-10T14:51:55.902610+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-12T00:36:17.238564+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,4,3],"acting":[2,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.9","version":"0'0","reported_seq":21,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.213556+0000","last_change":"2026-03-10T14:51:58.945305+0000","last_active":"2026-03-10T14:52:24.213556+0000","last_peered":"2026-03-10T14:52:24.213556+0000","last_clean":"2026-03-10T14:52:24.213556+0000","last_became_active":"2026-03-10T14:51:58.945190+0000","last_became_peered":"2026-03-10T14:51:58.945190+0000","last_
unstale":"2026-03-10T14:52:24.213556+0000","last_undegraded":"2026-03-10T14:52:24.213556+0000","last_fullsized":"2026-03-10T14:52:24.213556+0000","mapping_epoch":63,"log_start":"0'0","ondisk_log_start":"0'0","created":63,"last_epoch_clean":64,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_clean_scrub_stamp":"2026-03-10T14:51:57.913312+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T17:58:43.875107+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,2],"acting":[0,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.f","versi
on":"65'15","reported_seq":54,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.263789+0000","last_change":"2026-03-10T14:51:54.927635+0000","last_active":"2026-03-10T14:52:25.263789+0000","last_peered":"2026-03-10T14:52:25.263789+0000","last_clean":"2026-03-10T14:52:25.263789+0000","last_became_active":"2026-03-10T14:51:54.927541+0000","last_became_peered":"2026-03-10T14:51:54.927541+0000","last_unstale":"2026-03-10T14:52:25.263789+0000","last_undegraded":"2026-03-10T14:52:25.263789+0000","last_fullsized":"2026-03-10T14:52:25.263789+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_clean_scrub_stamp":"2026-03-10T14:51:53.708867+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:49:49.822249+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,0],"acting":[7,4,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.e","version":"0'0","reported_seq":33,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.194036+0000","last_change":"2026-03-10T14:51:52.744208+0000","last_active":"2026-03-10T14:52:25.194036+0000","last_peered":"2026-03-10T14:52:25.194036+0000","last_clean":"2026-03-10T14:52:25.194036+0000","last_became_active":"2026-03-10T14:51:52.744054+0000","last_became_peered":"2026-03-10T14:51:52.744054+0000","last_unstale":"2026-03-10T14:52:25.194036+0000","last_undegraded":"2026-03-10T14:52:25.194036+0000","last_fullsized":"2026-03-10T14:52:25.194036+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:5
1.696445+0000","last_clean_scrub_stamp":"2026-03-10T14:51:51.696445+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T17:09:38.862290+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,3,7],"acting":[2,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"5.9","version":"65'11","reported_seq":53,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:57.791840+0000","last_change":"2026-03-10T14:51:56.948808+0000","last_active":"2026-03-10T14:52:57.791840+0000","last_peered":"2026-03-10T14:52:57.791840+0000","last_clean":"2026-03-10T14:52:57.791840+0000","last_became_active":"2026-03-10T14:51:56.948627+0000","last_became_peered":"2026-03-10T14:51:56.948627+0000
","last_unstale":"2026-03-10T14:52:57.791840+0000","last_undegraded":"2026-03-10T14:52:57.791840+0000","last_fullsized":"2026-03-10T14:52:57.791840+0000","mapping_epoch":61,"log_start":"0'0","ondisk_log_start":"0'0","created":61,"last_epoch_clean":62,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_clean_scrub_stamp":"2026-03-10T14:51:55.902610+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T21:54:05.780560+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,4],"acting":[7,6,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6
.a","version":"0'0","reported_seq":21,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.210172+0000","last_change":"2026-03-10T14:51:59.510444+0000","last_active":"2026-03-10T14:52:24.210172+0000","last_peered":"2026-03-10T14:52:24.210172+0000","last_clean":"2026-03-10T14:52:24.210172+0000","last_became_active":"2026-03-10T14:51:59.510288+0000","last_became_peered":"2026-03-10T14:51:59.510288+0000","last_unstale":"2026-03-10T14:52:24.210172+0000","last_undegraded":"2026-03-10T14:52:24.210172+0000","last_fullsized":"2026-03-10T14:52:24.210172+0000","mapping_epoch":63,"log_start":"0'0","ondisk_log_start":"0'0","created":63,"last_epoch_clean":64,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_clean_scrub_stamp":"2026-03-10T14:51:57.913312+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-12T02:28:56.469567+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,6,0],"acting":[5,6,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"3.e","version":"65'11","reported_seq":48,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.264457+0000","last_change":"2026-03-10T14:51:54.938929+0000","last_active":"2026-03-10T14:52:25.264457+0000","last_peered":"2026-03-10T14:52:25.264457+0000","last_clean":"2026-03-10T14:52:25.264457+0000","last_became_active":"2026-03-10T14:51:54.927755+0000","last_became_peered":"2026-03-10T14:51:54.927755+0000","last_unstale":"2026-03-10T14:52:25.264457+0000","last_undegraded":"2026-03-10T14:52:25.264457+0000","last_fullsized":"2026-03-10T14:52:25.264457+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:53.70
8867+0000","last_clean_scrub_stamp":"2026-03-10T14:51:53.708867+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T23:19:34.531530+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,1],"acting":[7,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.f","version":"58'2","reported_seq":49,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.329494+0000","last_change":"2026-03-10T14:51:52.746829+0000","last_active":"2026-03-10T14:52:24.329494+0000","last_peered":"2026-03-10T14:52:24.329494+0000","last_clean":"2026-03-10T14:52:24.329494+0000","last_became_active":"2026-03-10T14:51:52.746683+0000","last_became_peered":"2026-03-10T14:51:52.746683+0
000","last_unstale":"2026-03-10T14:52:24.329494+0000","last_undegraded":"2026-03-10T14:52:24.329494+0000","last_fullsized":"2026-03-10T14:52:24.329494+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_clean_scrub_stamp":"2026-03-10T14:51:51.696445+0000","objects_scrubbed":0,"log_size":2,"log_dups_size":0,"ondisk_log_size":2,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T20:44:22.908143+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":92,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":14,"num_read_kb":14,"num_write":4,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,0,7],"acting":[4,0,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid
":"5.8","version":"0'0","reported_seq":25,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.194611+0000","last_change":"2026-03-10T14:51:56.927644+0000","last_active":"2026-03-10T14:52:25.194611+0000","last_peered":"2026-03-10T14:52:25.194611+0000","last_clean":"2026-03-10T14:52:25.194611+0000","last_became_active":"2026-03-10T14:51:56.927552+0000","last_became_peered":"2026-03-10T14:51:56.927552+0000","last_unstale":"2026-03-10T14:52:25.194611+0000","last_undegraded":"2026-03-10T14:52:25.194611+0000","last_fullsized":"2026-03-10T14:52:25.194611+0000","mapping_epoch":61,"log_start":"0'0","ondisk_log_start":"0'0","created":61,"last_epoch_clean":62,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_clean_scrub_stamp":"2026-03-10T14:51:55.902610+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T21:11:10.187287+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,0,1],"acting":[2,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.b","version":"0'0","reported_seq":21,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.337013+0000","last_change":"2026-03-10T14:51:58.946829+0000","last_active":"2026-03-10T14:52:24.337013+0000","last_peered":"2026-03-10T14:52:24.337013+0000","last_clean":"2026-03-10T14:52:24.337013+0000","last_became_active":"2026-03-10T14:51:58.946526+0000","last_became_peered":"2026-03-10T14:51:58.946526+0000","last_unstale":"2026-03-10T14:52:24.337013+0000","last_undegraded":"2026-03-10T14:52:24.337013+0000","last_fullsized":"2026-03-10T14:52:24.337013+0000","mapping_epoch":63,"log_start":"0'0","ondisk_log_start":"0'0","created":63,"last_epoch_clean":64,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:57.9133
12+0000","last_clean_scrub_stamp":"2026-03-10T14:51:57.913312+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T15:53:06.581325+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,1],"acting":[3,7,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.11","version":"65'11","reported_seq":48,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.263534+0000","last_change":"2026-03-10T14:51:54.928240+0000","last_active":"2026-03-10T14:52:25.263534+0000","last_peered":"2026-03-10T14:52:25.263534+0000","last_clean":"2026-03-10T14:52:25.263534+0000","last_became_active":"2026-03-10T14:51:54.928008+0000","last_became_peered":"2026-03-10T14:51:54.928008+0000","la
st_unstale":"2026-03-10T14:52:25.263534+0000","last_undegraded":"2026-03-10T14:52:25.263534+0000","last_fullsized":"2026-03-10T14:52:25.263534+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_clean_scrub_stamp":"2026-03-10T14:51:53.708867+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T15:42:43.013471+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,6],"acting":[7,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"
2.10","version":"0'0","reported_seq":33,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.193955+0000","last_change":"2026-03-10T14:51:52.733695+0000","last_active":"2026-03-10T14:52:25.193955+0000","last_peered":"2026-03-10T14:52:25.193955+0000","last_clean":"2026-03-10T14:52:25.193955+0000","last_became_active":"2026-03-10T14:51:52.732756+0000","last_became_peered":"2026-03-10T14:51:52.732756+0000","last_unstale":"2026-03-10T14:52:25.193955+0000","last_undegraded":"2026-03-10T14:52:25.193955+0000","last_fullsized":"2026-03-10T14:52:25.193955+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_clean_scrub_stamp":"2026-03-10T14:51:51.696445+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:20:33.261227+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,1,0],"acting":[2,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"5.17","version":"0'0","reported_seq":25,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.337143+0000","last_change":"2026-03-10T14:51:56.940374+0000","last_active":"2026-03-10T14:52:24.337143+0000","last_peered":"2026-03-10T14:52:24.337143+0000","last_clean":"2026-03-10T14:52:24.337143+0000","last_became_active":"2026-03-10T14:51:56.939967+0000","last_became_peered":"2026-03-10T14:51:56.939967+0000","last_unstale":"2026-03-10T14:52:24.337143+0000","last_undegraded":"2026-03-10T14:52:24.337143+0000","last_fullsized":"2026-03-10T14:52:24.337143+0000","mapping_epoch":61,"log_start":"0'0","ondisk_log_start":"0'0","created":61,"last_epoch_clean":62,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:55.902
610+0000","last_clean_scrub_stamp":"2026-03-10T14:51:55.902610+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T17:00:18.799940+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,7],"acting":[3,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.14","version":"0'0","reported_seq":21,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.194272+0000","last_change":"2026-03-10T14:51:58.948933+0000","last_active":"2026-03-10T14:52:25.194272+0000","last_peered":"2026-03-10T14:52:25.194272+0000","last_clean":"2026-03-10T14:52:25.194272+0000","last_became_active":"2026-03-10T14:51:58.948783+0000","last_became_peered":"2026-03-10T14:51:58.948783+0000","las
t_unstale":"2026-03-10T14:52:25.194272+0000","last_undegraded":"2026-03-10T14:52:25.194272+0000","last_fullsized":"2026-03-10T14:52:25.194272+0000","mapping_epoch":63,"log_start":"0'0","ondisk_log_start":"0'0","created":63,"last_epoch_clean":64,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_clean_scrub_stamp":"2026-03-10T14:51:57.913312+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-12T02:27:17.812468+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,4,7],"acting":[2,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"3.10","ve
rsion":"65'4","reported_seq":35,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.243256+0000","last_change":"2026-03-10T14:51:54.932295+0000","last_active":"2026-03-10T14:52:25.243256+0000","last_peered":"2026-03-10T14:52:25.243256+0000","last_clean":"2026-03-10T14:52:25.243256+0000","last_became_active":"2026-03-10T14:51:54.932181+0000","last_became_peered":"2026-03-10T14:51:54.932181+0000","last_unstale":"2026-03-10T14:52:25.243256+0000","last_undegraded":"2026-03-10T14:52:25.243256+0000","last_fullsized":"2026-03-10T14:52:25.243256+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_clean_scrub_stamp":"2026-03-10T14:51:53.708867+0000","objects_scrubbed":0,"log_size":4,"log_dups_size":0,"ondisk_log_size":4,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T22:28:40.352238+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":6,"num_read_kb":4,"num_write":4,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,0,5],"acting":[6,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"2.11","version":"0'0","reported_seq":33,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.242355+0000","last_change":"2026-03-10T14:51:52.728606+0000","last_active":"2026-03-10T14:52:25.242355+0000","last_peered":"2026-03-10T14:52:25.242355+0000","last_clean":"2026-03-10T14:52:25.242355+0000","last_became_active":"2026-03-10T14:51:52.728420+0000","last_became_peered":"2026-03-10T14:51:52.728420+0000","last_unstale":"2026-03-10T14:52:25.242355+0000","last_undegraded":"2026-03-10T14:52:25.242355+0000","last_fullsized":"2026-03-10T14:52:25.242355+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:51.696
445+0000","last_clean_scrub_stamp":"2026-03-10T14:51:51.696445+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T20:01:50.018046+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,1],"acting":[6,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"5.16","version":"0'0","reported_seq":25,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.211507+0000","last_change":"2026-03-10T14:51:56.939190+0000","last_active":"2026-03-10T14:52:24.211507+0000","last_peered":"2026-03-10T14:52:24.211507+0000","last_clean":"2026-03-10T14:52:24.211507+0000","last_became_active":"2026-03-10T14:51:56.938188+0000","last_became_peered":"2026-03-10T14:51:56.938188+0000","las
t_unstale":"2026-03-10T14:52:24.211507+0000","last_undegraded":"2026-03-10T14:52:24.211507+0000","last_fullsized":"2026-03-10T14:52:24.211507+0000","mapping_epoch":61,"log_start":"0'0","ondisk_log_start":"0'0","created":61,"last_epoch_clean":62,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_clean_scrub_stamp":"2026-03-10T14:51:55.902610+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T23:13:33.020929+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,1],"acting":[5,3,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.15","ve
rsion":"0'0","reported_seq":21,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.265070+0000","last_change":"2026-03-10T14:51:59.508610+0000","last_active":"2026-03-10T14:52:25.265070+0000","last_peered":"2026-03-10T14:52:25.265070+0000","last_clean":"2026-03-10T14:52:25.265070+0000","last_became_active":"2026-03-10T14:51:59.508350+0000","last_became_peered":"2026-03-10T14:51:59.508350+0000","last_unstale":"2026-03-10T14:52:25.265070+0000","last_undegraded":"2026-03-10T14:52:25.265070+0000","last_fullsized":"2026-03-10T14:52:25.265070+0000","mapping_epoch":63,"log_start":"0'0","ondisk_log_start":"0'0","created":63,"last_epoch_clean":64,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_clean_scrub_stamp":"2026-03-10T14:51:57.913312+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T22:17:04.062333+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,4],"acting":[7,6,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.13","version":"65'11","reported_seq":48,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.263890+0000","last_change":"2026-03-10T14:51:54.927522+0000","last_active":"2026-03-10T14:52:25.263890+0000","last_peered":"2026-03-10T14:52:25.263890+0000","last_clean":"2026-03-10T14:52:25.263890+0000","last_became_active":"2026-03-10T14:51:54.927436+0000","last_became_peered":"2026-03-10T14:51:54.927436+0000","last_unstale":"2026-03-10T14:52:25.263890+0000","last_undegraded":"2026-03-10T14:52:25.263890+0000","last_fullsized":"2026-03-10T14:52:25.263890+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:53.7
08867+0000","last_clean_scrub_stamp":"2026-03-10T14:51:53.708867+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T16:07:05.164262+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,2],"acting":[7,4,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.12","version":"0'0","reported_seq":33,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.211030+0000","last_change":"2026-03-10T14:51:52.742604+0000","last_active":"2026-03-10T14:52:24.211030+0000","last_peered":"2026-03-10T14:52:24.211030+0000","last_clean":"2026-03-10T14:52:24.211030+0000","last_became_active":"2026-03-10T14:51:52.742506+0000","last_became_peered":"2026-03-10T14:51:52.742506+
0000","last_unstale":"2026-03-10T14:52:24.211030+0000","last_undegraded":"2026-03-10T14:52:24.211030+0000","last_fullsized":"2026-03-10T14:52:24.211030+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_clean_scrub_stamp":"2026-03-10T14:51:51.696445+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-12T02:01:08.458113+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,7],"acting":[5,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":
"5.15","version":"65'11","reported_seq":50,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:57.791231+0000","last_change":"2026-03-10T14:51:56.939508+0000","last_active":"2026-03-10T14:52:57.791231+0000","last_peered":"2026-03-10T14:52:57.791231+0000","last_clean":"2026-03-10T14:52:57.791231+0000","last_became_active":"2026-03-10T14:51:56.939063+0000","last_became_peered":"2026-03-10T14:51:56.939063+0000","last_unstale":"2026-03-10T14:52:57.791231+0000","last_undegraded":"2026-03-10T14:52:57.791231+0000","last_fullsized":"2026-03-10T14:52:57.791231+0000","mapping_epoch":61,"log_start":"0'0","ondisk_log_start":"0'0","created":61,"last_epoch_clean":62,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_clean_scrub_stamp":"2026-03-10T14:51:55.902610+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T19:25:20.989255+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,0],"acting":[5,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.16","version":"0'0","reported_seq":21,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.213743+0000","last_change":"2026-03-10T14:51:58.945353+0000","last_active":"2026-03-10T14:52:24.213743+0000","last_peered":"2026-03-10T14:52:24.213743+0000","last_clean":"2026-03-10T14:52:24.213743+0000","last_became_active":"2026-03-10T14:51:58.945214+0000","last_became_peered":"2026-03-10T14:51:58.945214+0000","last_unstale":"2026-03-10T14:52:24.213743+0000","last_undegraded":"2026-03-10T14:52:24.213743+0000","last_fullsized":"2026-03-10T14:52:24.213743+0000","mapping_epoch":63,"log_start":"0'0","ondisk_log_start":"0'0","created":63,"last_epoch_clean":64,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:57.913
312+0000","last_clean_scrub_stamp":"2026-03-10T14:51:57.913312+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T19:55:30.387498+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,3],"acting":[0,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.12","version":"65'9","reported_seq":45,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.213939+0000","last_change":"2026-03-10T14:51:54.922190+0000","last_active":"2026-03-10T14:52:24.213939+0000","last_peered":"2026-03-10T14:52:24.213939+0000","last_clean":"2026-03-10T14:52:24.213939+0000","last_became_active":"2026-03-10T14:51:54.918400+0000","last_became_peered":"2026-03-10T14:51:54.918400+0000","la
st_unstale":"2026-03-10T14:52:24.213939+0000","last_undegraded":"2026-03-10T14:52:24.213939+0000","last_fullsized":"2026-03-10T14:52:24.213939+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_clean_scrub_stamp":"2026-03-10T14:51:53.708867+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T16:43:28.642723+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,3],"acting":[0,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"2.
13","version":"0'0","reported_seq":33,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.213899+0000","last_change":"2026-03-10T14:51:52.724303+0000","last_active":"2026-03-10T14:52:24.213899+0000","last_peered":"2026-03-10T14:52:24.213899+0000","last_clean":"2026-03-10T14:52:24.213899+0000","last_became_active":"2026-03-10T14:51:52.723665+0000","last_became_peered":"2026-03-10T14:51:52.723665+0000","last_unstale":"2026-03-10T14:52:24.213899+0000","last_undegraded":"2026-03-10T14:52:24.213899+0000","last_fullsized":"2026-03-10T14:52:24.213899+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_clean_scrub_stamp":"2026-03-10T14:51:51.696445+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T17:47:03.837440+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,4,3],"acting":[0,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"5.14","version":"65'11","reported_seq":51,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:53:02.793746+0000","last_change":"2026-03-10T14:51:56.942666+0000","last_active":"2026-03-10T14:53:02.793746+0000","last_peered":"2026-03-10T14:53:02.793746+0000","last_clean":"2026-03-10T14:53:02.793746+0000","last_became_active":"2026-03-10T14:51:56.942577+0000","last_became_peered":"2026-03-10T14:51:56.942577+0000","last_unstale":"2026-03-10T14:53:02.793746+0000","last_undegraded":"2026-03-10T14:53:02.793746+0000","last_fullsized":"2026-03-10T14:53:02.793746+0000","mapping_epoch":61,"log_start":"0'0","ondisk_log_start":"0'0","created":61,"last_epoch_clean":62,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:55.9
02610+0000","last_clean_scrub_stamp":"2026-03-10T14:51:55.902610+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T16:56:48.261412+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,2],"acting":[3,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.17","version":"0'0","reported_seq":21,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.329265+0000","last_change":"2026-03-10T14:51:58.951959+0000","last_active":"2026-03-10T14:52:24.329265+0000","last_peered":"2026-03-10T14:52:24.329265+0000","last_clean":"2026-03-10T14:52:24.329265+0000","last_became_active":"2026-03-10T14:51:58.951793+0000","last_became_peered":"2026-03-10T14:51:58.951793+0000",
"last_unstale":"2026-03-10T14:52:24.329265+0000","last_undegraded":"2026-03-10T14:52:24.329265+0000","last_fullsized":"2026-03-10T14:52:24.329265+0000","mapping_epoch":63,"log_start":"0'0","ondisk_log_start":"0'0","created":63,"last_epoch_clean":64,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_clean_scrub_stamp":"2026-03-10T14:51:57.913312+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-12T00:57:06.637753+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,2,5],"acting":[4,2,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.15"
,"version":"65'9","reported_seq":45,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.263470+0000","last_change":"2026-03-10T14:51:54.928200+0000","last_active":"2026-03-10T14:52:25.263470+0000","last_peered":"2026-03-10T14:52:25.263470+0000","last_clean":"2026-03-10T14:52:25.263470+0000","last_became_active":"2026-03-10T14:51:54.927884+0000","last_became_peered":"2026-03-10T14:51:54.927884+0000","last_unstale":"2026-03-10T14:52:25.263470+0000","last_undegraded":"2026-03-10T14:52:25.263470+0000","last_fullsized":"2026-03-10T14:52:25.263470+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_clean_scrub_stamp":"2026-03-10T14:51:53.708867+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T23:28:45.465114+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,3,4],"acting":[7,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.14","version":"0'0","reported_seq":33,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.242425+0000","last_change":"2026-03-10T14:51:52.739040+0000","last_active":"2026-03-10T14:52:25.242425+0000","last_peered":"2026-03-10T14:52:25.242425+0000","last_clean":"2026-03-10T14:52:25.242425+0000","last_became_active":"2026-03-10T14:51:52.738943+0000","last_became_peered":"2026-03-10T14:51:52.738943+0000","last_unstale":"2026-03-10T14:52:25.242425+0000","last_undegraded":"2026-03-10T14:52:25.242425+0000","last_fullsized":"2026-03-10T14:52:25.242425+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:
51.696445+0000","last_clean_scrub_stamp":"2026-03-10T14:51:51.696445+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T22:51:48.451308+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,3,5],"acting":[6,3,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"5.13","version":"0'0","reported_seq":25,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.337151+0000","last_change":"2026-03-10T14:51:56.939741+0000","last_active":"2026-03-10T14:52:24.337151+0000","last_peered":"2026-03-10T14:52:24.337151+0000","last_clean":"2026-03-10T14:52:24.337151+0000","last_became_active":"2026-03-10T14:51:56.939616+0000","last_became_peered":"2026-03-10T14:51:56.939616+0000
","last_unstale":"2026-03-10T14:52:24.337151+0000","last_undegraded":"2026-03-10T14:52:24.337151+0000","last_fullsized":"2026-03-10T14:52:24.337151+0000","mapping_epoch":61,"log_start":"0'0","ondisk_log_start":"0'0","created":61,"last_epoch_clean":62,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_clean_scrub_stamp":"2026-03-10T14:51:55.902610+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-12T00:35:09.055952+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,1],"acting":[3,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.1
0","version":"0'0","reported_seq":21,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.213610+0000","last_change":"2026-03-10T14:51:58.954126+0000","last_active":"2026-03-10T14:52:24.213610+0000","last_peered":"2026-03-10T14:52:24.213610+0000","last_clean":"2026-03-10T14:52:24.213610+0000","last_became_active":"2026-03-10T14:51:58.954028+0000","last_became_peered":"2026-03-10T14:51:58.954028+0000","last_unstale":"2026-03-10T14:52:24.213610+0000","last_undegraded":"2026-03-10T14:52:24.213610+0000","last_fullsized":"2026-03-10T14:52:24.213610+0000","mapping_epoch":63,"log_start":"0'0","ondisk_log_start":"0'0","created":63,"last_epoch_clean":64,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_clean_scrub_stamp":"2026-03-10T14:51:57.913312+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T23:34:31.059158+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,1],"acting":[0,5,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.14","version":"65'10","reported_seq":44,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.328654+0000","last_change":"2026-03-10T14:51:54.926615+0000","last_active":"2026-03-10T14:52:24.328654+0000","last_peered":"2026-03-10T14:52:24.328654+0000","last_clean":"2026-03-10T14:52:24.328654+0000","last_became_active":"2026-03-10T14:51:54.926355+0000","last_became_peered":"2026-03-10T14:51:54.926355+0000","last_unstale":"2026-03-10T14:52:24.328654+0000","last_undegraded":"2026-03-10T14:52:24.328654+0000","last_fullsized":"2026-03-10T14:52:24.328654+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:53.7
08867+0000","last_clean_scrub_stamp":"2026-03-10T14:51:53.708867+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T19:22:51.944185+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,7,6],"acting":[4,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.15","version":"58'1","reported_seq":41,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.212013+0000","last_change":"2026-03-10T14:51:52.720660+0000","last_active":"2026-03-10T14:52:24.212013+0000","last_peered":"2026-03-10T14:52:24.212013+0000","last_clean":"2026-03-10T14:52:24.212013+0000","last_became_active":"2026-03-10T14:51:52.720554+0000","last_became_peered":"2026-03-10T14:51:52.720554+0
000","last_unstale":"2026-03-10T14:52:24.212013+0000","last_undegraded":"2026-03-10T14:52:24.212013+0000","last_fullsized":"2026-03-10T14:52:24.212013+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_clean_scrub_stamp":"2026-03-10T14:51:51.696445+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-12T02:00:19.554898+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":993,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":10,"num_read_kb":10,"num_write":2,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,0],"acting":[1,5,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgi
d":"5.12","version":"0'0","reported_seq":25,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.212024+0000","last_change":"2026-03-10T14:51:56.938614+0000","last_active":"2026-03-10T14:52:24.212024+0000","last_peered":"2026-03-10T14:52:24.212024+0000","last_clean":"2026-03-10T14:52:24.212024+0000","last_became_active":"2026-03-10T14:51:56.937756+0000","last_became_peered":"2026-03-10T14:51:56.937756+0000","last_unstale":"2026-03-10T14:52:24.212024+0000","last_undegraded":"2026-03-10T14:52:24.212024+0000","last_fullsized":"2026-03-10T14:52:24.212024+0000","mapping_epoch":61,"log_start":"0'0","ondisk_log_start":"0'0","created":61,"last_epoch_clean":62,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_clean_scrub_stamp":"2026-03-10T14:51:55.902610+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T23:11:14.188912+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,3],"acting":[1,5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.11","version":"0'0","reported_seq":21,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.337083+0000","last_change":"2026-03-10T14:51:58.955142+0000","last_active":"2026-03-10T14:52:24.337083+0000","last_peered":"2026-03-10T14:52:24.337083+0000","last_clean":"2026-03-10T14:52:24.337083+0000","last_became_active":"2026-03-10T14:51:58.955049+0000","last_became_peered":"2026-03-10T14:51:58.955049+0000","last_unstale":"2026-03-10T14:52:24.337083+0000","last_undegraded":"2026-03-10T14:52:24.337083+0000","last_fullsized":"2026-03-10T14:52:24.337083+0000","mapping_epoch":63,"log_start":"0'0","ondisk_log_start":"0'0","created":63,"last_epoch_clean":64,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:57.913
312+0000","last_clean_scrub_stamp":"2026-03-10T14:51:57.913312+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T16:07:43.290853+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,5],"acting":[3,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.17","version":"65'6","reported_seq":38,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.214558+0000","last_change":"2026-03-10T14:51:54.926625+0000","last_active":"2026-03-10T14:52:24.214558+0000","last_peered":"2026-03-10T14:52:24.214558+0000","last_clean":"2026-03-10T14:52:24.214558+0000","last_became_active":"2026-03-10T14:51:54.926495+0000","last_became_peered":"2026-03-10T14:51:54.926495+0000","la
st_unstale":"2026-03-10T14:52:24.214558+0000","last_undegraded":"2026-03-10T14:52:24.214558+0000","last_fullsized":"2026-03-10T14:52:24.214558+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_clean_scrub_stamp":"2026-03-10T14:51:53.708867+0000","objects_scrubbed":0,"log_size":6,"log_dups_size":0,"ondisk_log_size":6,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-12T02:42:10.784931+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":3,"num_object_clones":0,"num_object_copies":9,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":3,"num_whiteouts":0,"num_read":9,"num_read_kb":6,"num_write":6,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,3],"acting":[0,5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"2.16","v
ersion":"0'0","reported_seq":33,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.210974+0000","last_change":"2026-03-10T14:51:52.739244+0000","last_active":"2026-03-10T14:52:24.210974+0000","last_peered":"2026-03-10T14:52:24.210974+0000","last_clean":"2026-03-10T14:52:24.210974+0000","last_became_active":"2026-03-10T14:51:52.739149+0000","last_became_peered":"2026-03-10T14:51:52.739149+0000","last_unstale":"2026-03-10T14:52:24.210974+0000","last_undegraded":"2026-03-10T14:52:24.210974+0000","last_fullsized":"2026-03-10T14:52:24.210974+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_clean_scrub_stamp":"2026-03-10T14:51:51.696445+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-12T00:45:26.078705+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,6,2],"acting":[5,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.11","version":"0'0","reported_seq":25,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.243012+0000","last_change":"2026-03-10T14:51:56.946808+0000","last_active":"2026-03-10T14:52:25.243012+0000","last_peered":"2026-03-10T14:52:25.243012+0000","last_clean":"2026-03-10T14:52:25.243012+0000","last_became_active":"2026-03-10T14:51:56.946011+0000","last_became_peered":"2026-03-10T14:51:56.946011+0000","last_unstale":"2026-03-10T14:52:25.243012+0000","last_undegraded":"2026-03-10T14:52:25.243012+0000","last_fullsized":"2026-03-10T14:52:25.243012+0000","mapping_epoch":61,"log_start":"0'0","ondisk_log_start":"0'0","created":61,"last_epoch_clean":62,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:55.902
610+0000","last_clean_scrub_stamp":"2026-03-10T14:51:55.902610+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T17:21:06.813207+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,7],"acting":[6,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"6.12","version":"0'0","reported_seq":21,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.265024+0000","last_change":"2026-03-10T14:51:58.946120+0000","last_active":"2026-03-10T14:52:25.265024+0000","last_peered":"2026-03-10T14:52:25.265024+0000","last_clean":"2026-03-10T14:52:25.265024+0000","last_became_active":"2026-03-10T14:51:58.946041+0000","last_became_peered":"2026-03-10T14:51:58.946041+0000","las
t_unstale":"2026-03-10T14:52:25.265024+0000","last_undegraded":"2026-03-10T14:52:25.265024+0000","last_fullsized":"2026-03-10T14:52:25.265024+0000","mapping_epoch":63,"log_start":"0'0","ondisk_log_start":"0'0","created":63,"last_epoch_clean":64,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_clean_scrub_stamp":"2026-03-10T14:51:57.913312+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T21:27:38.506440+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,2,4],"acting":[7,2,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.16","ve
rsion":"65'9","reported_seq":45,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.210453+0000","last_change":"2026-03-10T14:51:54.923855+0000","last_active":"2026-03-10T14:52:24.210453+0000","last_peered":"2026-03-10T14:52:24.210453+0000","last_clean":"2026-03-10T14:52:24.210453+0000","last_became_active":"2026-03-10T14:51:54.923672+0000","last_became_peered":"2026-03-10T14:51:54.923672+0000","last_unstale":"2026-03-10T14:52:24.210453+0000","last_undegraded":"2026-03-10T14:52:24.210453+0000","last_fullsized":"2026-03-10T14:52:24.210453+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_clean_scrub_stamp":"2026-03-10T14:51:53.708867+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:14:18.149056+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,7,1],"acting":[5,7,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"2.17","version":"0'0","reported_seq":33,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.242472+0000","last_change":"2026-03-10T14:51:52.736202+0000","last_active":"2026-03-10T14:52:25.242472+0000","last_peered":"2026-03-10T14:52:25.242472+0000","last_clean":"2026-03-10T14:52:25.242472+0000","last_became_active":"2026-03-10T14:51:52.735386+0000","last_became_peered":"2026-03-10T14:51:52.735386+0000","last_unstale":"2026-03-10T14:52:25.242472+0000","last_undegraded":"2026-03-10T14:52:25.242472+0000","last_fullsized":"2026-03-10T14:52:25.242472+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:
51.696445+0000","last_clean_scrub_stamp":"2026-03-10T14:51:51.696445+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T16:13:54.152568+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,5,2],"acting":[6,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"5.10","version":"0'0","reported_seq":25,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.264630+0000","last_change":"2026-03-10T14:51:56.948999+0000","last_active":"2026-03-10T14:52:25.264630+0000","last_peered":"2026-03-10T14:52:25.264630+0000","last_clean":"2026-03-10T14:52:25.264630+0000","last_became_active":"2026-03-10T14:51:56.948543+0000","last_became_peered":"2026-03-10T14:51:56.948543+0000
","last_unstale":"2026-03-10T14:52:25.264630+0000","last_undegraded":"2026-03-10T14:52:25.264630+0000","last_fullsized":"2026-03-10T14:52:25.264630+0000","mapping_epoch":61,"log_start":"0'0","ondisk_log_start":"0'0","created":61,"last_epoch_clean":62,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_clean_scrub_stamp":"2026-03-10T14:51:55.902610+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T19:31:07.688481+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,6],"acting":[7,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.1
3","version":"0'0","reported_seq":21,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.336966+0000","last_change":"2026-03-10T14:51:59.510070+0000","last_active":"2026-03-10T14:52:24.336966+0000","last_peered":"2026-03-10T14:52:24.336966+0000","last_clean":"2026-03-10T14:52:24.336966+0000","last_became_active":"2026-03-10T14:51:59.509938+0000","last_became_peered":"2026-03-10T14:51:59.509938+0000","last_unstale":"2026-03-10T14:52:24.336966+0000","last_undegraded":"2026-03-10T14:52:24.336966+0000","last_fullsized":"2026-03-10T14:52:24.336966+0000","mapping_epoch":63,"log_start":"0'0","ondisk_log_start":"0'0","created":63,"last_epoch_clean":64,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_clean_scrub_stamp":"2026-03-10T14:51:57.913312+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T19:16:38.162429+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,6],"acting":[3,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.1c","version":"65'1","reported_seq":23,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.263985+0000","last_change":"2026-03-10T14:51:58.950284+0000","last_active":"2026-03-10T14:52:25.263985+0000","last_peered":"2026-03-10T14:52:25.263985+0000","last_clean":"2026-03-10T14:52:25.263985+0000","last_became_active":"2026-03-10T14:51:58.950124+0000","last_became_peered":"2026-03-10T14:51:58.950124+0000","last_unstale":"2026-03-10T14:52:25.263985+0000","last_undegraded":"2026-03-10T14:52:25.263985+0000","last_fullsized":"2026-03-10T14:52:25.263985+0000","mapping_epoch":63,"log_start":"0'0","ondisk_log_start":"0'0","created":63,"last_epoch_clean":64,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:57.91
3312+0000","last_clean_scrub_stamp":"2026-03-10T14:51:57.913312+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T21:19:15.871427+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":403,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":3,"num_read_kb":3,"num_write":2,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,5,2],"acting":[7,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.19","version":"65'15","reported_seq":54,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.212430+0000","last_change":"2026-03-10T14:51:54.929255+0000","last_active":"2026-03-10T14:52:24.212430+0000","last_peered":"2026-03-10T14:52:24.212430+0000","last_clean":"2026-03-10T14:52:24.212430+0000","last_became_active":"2026-03-10T14:51:54.929131+0000","last_became_peered":"2026-03-10T14:51:54.929131+0000"
,"last_unstale":"2026-03-10T14:52:24.212430+0000","last_undegraded":"2026-03-10T14:52:24.212430+0000","last_fullsized":"2026-03-10T14:52:24.212430+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_clean_scrub_stamp":"2026-03-10T14:51:53.708867+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-12T01:52:32.147475+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,3,4],"acting":[1,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgi
d":"2.18","version":"0'0","reported_seq":33,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.211221+0000","last_change":"2026-03-10T14:51:52.742783+0000","last_active":"2026-03-10T14:52:24.211221+0000","last_peered":"2026-03-10T14:52:24.211221+0000","last_clean":"2026-03-10T14:52:24.211221+0000","last_became_active":"2026-03-10T14:51:52.742661+0000","last_became_peered":"2026-03-10T14:51:52.742661+0000","last_unstale":"2026-03-10T14:52:24.211221+0000","last_undegraded":"2026-03-10T14:52:24.211221+0000","last_fullsized":"2026-03-10T14:52:24.211221+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_clean_scrub_stamp":"2026-03-10T14:51:51.696445+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:54:01.940073+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,7],"acting":[5,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.1f","version":"65'11","reported_seq":53,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:57.791231+0000","last_change":"2026-03-10T14:51:56.946142+0000","last_active":"2026-03-10T14:52:57.791231+0000","last_peered":"2026-03-10T14:52:57.791231+0000","last_clean":"2026-03-10T14:52:57.791231+0000","last_became_active":"2026-03-10T14:51:56.945851+0000","last_became_peered":"2026-03-10T14:51:56.945851+0000","last_unstale":"2026-03-10T14:52:57.791231+0000","last_undegraded":"2026-03-10T14:52:57.791231+0000","last_fullsized":"2026-03-10T14:52:57.791231+0000","mapping_epoch":61,"log_start":"0'0","ondisk_log_start":"0'0","created":61,"last_epoch_clean":62,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:55.9
02610+0000","last_clean_scrub_stamp":"2026-03-10T14:51:55.902610+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T14:58:08.033080+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,7],"acting":[6,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"6.1d","version":"0'0","reported_seq":21,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.212502+0000","last_change":"2026-03-10T14:51:58.955036+0000","last_active":"2026-03-10T14:52:24.212502+0000","last_peered":"2026-03-10T14:52:24.212502+0000","last_clean":"2026-03-10T14:52:24.212502+0000","last_became_active":"2026-03-10T14:51:58.954950+0000","last_became_peered":"2026-03-10T14:51:58.954950+0000",
"last_unstale":"2026-03-10T14:52:24.212502+0000","last_undegraded":"2026-03-10T14:52:24.212502+0000","last_fullsized":"2026-03-10T14:52:24.212502+0000","mapping_epoch":63,"log_start":"0'0","ondisk_log_start":"0'0","created":63,"last_epoch_clean":64,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_clean_scrub_stamp":"2026-03-10T14:51:57.913312+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T17:44:27.561203+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,4],"acting":[1,5,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.18"
,"version":"65'9","reported_seq":45,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.337332+0000","last_change":"2026-03-10T14:51:54.919597+0000","last_active":"2026-03-10T14:52:24.337332+0000","last_peered":"2026-03-10T14:52:24.337332+0000","last_clean":"2026-03-10T14:52:24.337332+0000","last_became_active":"2026-03-10T14:51:54.919323+0000","last_became_peered":"2026-03-10T14:51:54.919323+0000","last_unstale":"2026-03-10T14:52:24.337332+0000","last_undegraded":"2026-03-10T14:52:24.337332+0000","last_fullsized":"2026-03-10T14:52:24.337332+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_clean_scrub_stamp":"2026-03-10T14:51:53.708867+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T17:32:31.794997+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,1],"acting":[3,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.19","version":"58'1","reported_seq":34,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.337294+0000","last_change":"2026-03-10T14:51:52.731338+0000","last_active":"2026-03-10T14:52:24.337294+0000","last_peered":"2026-03-10T14:52:24.337294+0000","last_clean":"2026-03-10T14:52:24.337294+0000","last_became_active":"2026-03-10T14:51:52.731245+0000","last_became_peered":"2026-03-10T14:51:52.731245+0000","last_unstale":"2026-03-10T14:52:24.337294+0000","last_undegraded":"2026-03-10T14:52:24.337294+0000","last_fullsized":"2026-03-10T14:52:24.337294+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51
:51.696445+0000","last_clean_scrub_stamp":"2026-03-10T14:51:51.696445+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T18:49:16.856140+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":46,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":1,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,0],"acting":[3,6,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.1e","version":"0'0","reported_seq":25,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.214215+0000","last_change":"2026-03-10T14:51:56.938136+0000","last_active":"2026-03-10T14:52:24.214215+0000","last_peered":"2026-03-10T14:52:24.214215+0000","last_clean":"2026-03-10T14:52:24.214215+0000","last_became_active":"2026-03-10T14:51:56.938019+0000","last_became_peered":"2026-03-10T14:51:56.938019+00
00","last_unstale":"2026-03-10T14:52:24.214215+0000","last_undegraded":"2026-03-10T14:52:24.214215+0000","last_fullsized":"2026-03-10T14:52:24.214215+0000","mapping_epoch":61,"log_start":"0'0","ondisk_log_start":"0'0","created":61,"last_epoch_clean":62,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_clean_scrub_stamp":"2026-03-10T14:51:55.902610+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T20:58:46.309226+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,2],"acting":[0,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"6
.1e","version":"0'0","reported_seq":21,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.328805+0000","last_change":"2026-03-10T14:51:59.509798+0000","last_active":"2026-03-10T14:52:24.328805+0000","last_peered":"2026-03-10T14:52:24.328805+0000","last_clean":"2026-03-10T14:52:24.328805+0000","last_became_active":"2026-03-10T14:51:59.509192+0000","last_became_peered":"2026-03-10T14:51:59.509192+0000","last_unstale":"2026-03-10T14:52:24.328805+0000","last_undegraded":"2026-03-10T14:52:24.328805+0000","last_fullsized":"2026-03-10T14:52:24.328805+0000","mapping_epoch":63,"log_start":"0'0","ondisk_log_start":"0'0","created":63,"last_epoch_clean":64,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_clean_scrub_stamp":"2026-03-10T14:51:57.913312+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T23:21:23.147338+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,6,5],"acting":[4,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.1a","version":"58'1","reported_seq":41,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.242510+0000","last_change":"2026-03-10T14:51:52.742063+0000","last_active":"2026-03-10T14:52:25.242510+0000","last_peered":"2026-03-10T14:52:25.242510+0000","last_clean":"2026-03-10T14:52:25.242510+0000","last_became_active":"2026-03-10T14:51:52.741949+0000","last_became_peered":"2026-03-10T14:51:52.741949+0000","last_unstale":"2026-03-10T14:52:25.242510+0000","last_undegraded":"2026-03-10T14:52:25.242510+0000","last_fullsized":"2026-03-10T14:52:25.242510+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:51.69
6445+0000","last_clean_scrub_stamp":"2026-03-10T14:51:51.696445+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-12T00:09:42.932393+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":436,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":10,"num_read_kb":10,"num_write":2,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,7],"acting":[6,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"3.1b","version":"65'5","reported_seq":39,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.214604+0000","last_change":"2026-03-10T14:51:54.931242+0000","last_active":"2026-03-10T14:52:24.214604+0000","last_peered":"2026-03-10T14:52:24.214604+0000","last_clean":"2026-03-10T14:52:24.214604+0000","last_became_active":"2026-03-10T14:51:54.931165+0000","last_became_peered":"2026-03-10T14:51:54.931165+0000
","last_unstale":"2026-03-10T14:52:24.214604+0000","last_undegraded":"2026-03-10T14:52:24.214604+0000","last_fullsized":"2026-03-10T14:52:24.214604+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_clean_scrub_stamp":"2026-03-10T14:51:53.708867+0000","objects_scrubbed":0,"log_size":5,"log_dups_size":0,"ondisk_log_size":5,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T22:51:25.893354+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":11,"num_read_kb":7,"num_write":6,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,4,7],"acting":[0,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"
5.1d","version":"0'0","reported_seq":25,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.212782+0000","last_change":"2026-03-10T14:51:56.949876+0000","last_active":"2026-03-10T14:52:24.212782+0000","last_peered":"2026-03-10T14:52:24.212782+0000","last_clean":"2026-03-10T14:52:24.212782+0000","last_became_active":"2026-03-10T14:51:56.949674+0000","last_became_peered":"2026-03-10T14:51:56.949674+0000","last_unstale":"2026-03-10T14:52:24.212782+0000","last_undegraded":"2026-03-10T14:52:24.212782+0000","last_fullsized":"2026-03-10T14:52:24.212782+0000","mapping_epoch":61,"log_start":"0'0","ondisk_log_start":"0'0","created":61,"last_epoch_clean":62,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_clean_scrub_stamp":"2026-03-10T14:51:55.902610+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-12T00:25:29.989509+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,4,0],"acting":[1,4,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.1f","version":"0'0","reported_seq":21,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.337639+0000","last_change":"2026-03-10T14:51:59.513207+0000","last_active":"2026-03-10T14:52:24.337639+0000","last_peered":"2026-03-10T14:52:24.337639+0000","last_clean":"2026-03-10T14:52:24.337639+0000","last_became_active":"2026-03-10T14:51:59.513111+0000","last_became_peered":"2026-03-10T14:51:59.513111+0000","last_unstale":"2026-03-10T14:52:24.337639+0000","last_undegraded":"2026-03-10T14:52:24.337639+0000","last_fullsized":"2026-03-10T14:52:24.337639+0000","mapping_epoch":63,"log_start":"0'0","ondisk_log_start":"0'0","created":63,"last_epoch_clean":64,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:57.913
312+0000","last_clean_scrub_stamp":"2026-03-10T14:51:57.913312+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T18:16:20.305949+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,5],"acting":[3,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.1b","version":"0'0","reported_seq":33,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.337682+0000","last_change":"2026-03-10T14:51:52.748824+0000","last_active":"2026-03-10T14:52:24.337682+0000","last_peered":"2026-03-10T14:52:24.337682+0000","last_clean":"2026-03-10T14:52:24.337682+0000","last_became_active":"2026-03-10T14:51:52.748433+0000","last_became_peered":"2026-03-10T14:51:52.748433+0000","las
t_unstale":"2026-03-10T14:52:24.337682+0000","last_undegraded":"2026-03-10T14:52:24.337682+0000","last_fullsized":"2026-03-10T14:52:24.337682+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_clean_scrub_stamp":"2026-03-10T14:51:51.696445+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T15:23:21.911199+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,6],"acting":[3,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.1a","ve
rsion":"65'9","reported_seq":45,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.329587+0000","last_change":"2026-03-10T14:51:54.926723+0000","last_active":"2026-03-10T14:52:24.329587+0000","last_peered":"2026-03-10T14:52:24.329587+0000","last_clean":"2026-03-10T14:52:24.329587+0000","last_became_active":"2026-03-10T14:51:54.926582+0000","last_became_peered":"2026-03-10T14:51:54.926582+0000","last_unstale":"2026-03-10T14:52:24.329587+0000","last_undegraded":"2026-03-10T14:52:24.329587+0000","last_fullsized":"2026-03-10T14:52:24.329587+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_clean_scrub_stamp":"2026-03-10T14:51:53.708867+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T22:25:02.502706+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,1,2],"acting":[4,1,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.1c","version":"0'0","reported_seq":25,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.329551+0000","last_change":"2026-03-10T14:51:56.942920+0000","last_active":"2026-03-10T14:52:24.329551+0000","last_peered":"2026-03-10T14:52:24.329551+0000","last_clean":"2026-03-10T14:52:24.329551+0000","last_became_active":"2026-03-10T14:51:56.942057+0000","last_became_peered":"2026-03-10T14:51:56.942057+0000","last_unstale":"2026-03-10T14:52:24.329551+0000","last_undegraded":"2026-03-10T14:52:24.329551+0000","last_fullsized":"2026-03-10T14:52:24.329551+0000","mapping_epoch":61,"log_start":"0'0","ondisk_log_start":"0'0","created":61,"last_epoch_clean":62,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:
55.902610+0000","last_clean_scrub_stamp":"2026-03-10T14:51:55.902610+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-12T00:28:45.685672+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,2],"acting":[4,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]}],"pool_stats":[{"poolid":6,"num_pg":32,"stat_sum":{"num_bytes":416,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":3,"num_read_kb":3,"num_write":3,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"
num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":24576,"data_stored":1248,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":2,"ondisk_log_size":2,"up":96,"acting":96,"num_store_stats":8},{"poolid":5,"num_pg":32,"stat_sum":{"num_bytes":0,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":88,"ondisk_log_size":88,"up":96,"acting":96,"num_store_stats":8},{"poolid":4,"
num_pg":3,"stat_sum":{"num_bytes":408,"num_objects":3,"num_object_clones":0,"num_object_copies":9,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":3,"num_whiteouts":0,"num_read":67,"num_read_kb":62,"num_write":6,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":24576,"data_stored":1224,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":8,"ondisk_log_size":8,"up":9,"acting":9,"num_store_stats":7},{"poolid":3,"num_pg":32,"stat_sum":{"num_bytes":3702,"num_objects":178,"num_object_clones":0,"num_object_copies":534,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":178,"num_whiteouts":0,"num_read":701,"num_read_kb":458,"num_write":417,"num_write_kb":34,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapse
ts":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":417792,"data_stored":11106,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":395,"ondisk_log_size":395,"up":96,"acting":96,"num_store_stats":8},{"poolid":2,"num_pg":32,"stat_sum":{"num_bytes":1613,"num_objects":6,"num_object_clones":0,"num_object_copies":18,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":6,"num_whiteouts":0,"num_read":34,"num_read_kb":34,"num_write":10,"num_write_kb":6,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":73728,"data_stored":4839,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":6,"ondisk_log_size":6,"up":96,"acting":96,"num_store_stats":8},{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":106,"num_read_kb":213,"num_write":69,"num_write_kb":584,"num_scrub
_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":1388544,"data_stored":1377840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":39,"ondisk_log_size":39,"up":3,"acting":3,"num_store_stats":3}],"osd_stats":[{"osd":7,"up_from":54,"seq":231928234004,"num_pgs":60,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27984,"kb_used_data":1148,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939440,"statfs":{"total":21470642176,"available":21441986560,"internally_reserved":0,"allocated":1175552,"data_stored":724699,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1585,"internal_metadata":27457999},"hb_peers":[0,1,2,3,4,5,6],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":6,"up_from":47,"seq":201863462941,"num_pgs":42,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27948,"kb_used_data":1116,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939476,"statfs":{"total":21470642176,"available":21442023424,"internally_reserved":0,"allocated":1142784,"data_stored":723344,"data_compressed":0,"data_compressed_allocat
ed":0,"data_compressed_original":0,"omap_allocated":1589,"internal_metadata":27457995},"hb_peers":[0,1,2,3,4,5,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":5,"up_from":39,"seq":167503724578,"num_pgs":53,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27508,"kb_used_data":668,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939916,"statfs":{"total":21470642176,"available":21442473984,"internally_reserved":0,"allocated":684032,"data_stored":265023,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,2,3,4,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":4,"up_from":32,"seq":137438953514,"num_pgs":56,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27548,"kb_used_data":708,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939876,"statfs":{"total":21470642176,"available":21442433024,"internally_reserved":0,"allocated":724992,"data_stored":265022,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,2,3,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":3,"up_from":26,"seq":111669149745,"num_pgs":50,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27504,"kb_used_data":664,"kb_used_omap":
1,"kb_used_meta":26814,"kb_avail":20939920,"statfs":{"total":21470642176,"available":21442478080,"internally_reserved":0,"allocated":679936,"data_stored":264028,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1588,"internal_metadata":27457996},"hb_peers":[0,1,2,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":2,"up_from":18,"seq":77309411383,"num_pgs":38,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27504,"kb_used_data":664,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939920,"statfs":{"total":21470642176,"available":21442478080,"internally_reserved":0,"allocated":679936,"data_stored":264121,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":1,"up_from":13,"seq":55834574910,"num_pgs":47,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27520,"kb_used_data":680,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939904,"statfs":{"total":21470642176,"available":21442461696,"internally_reserved":0,"allocated":696320,"data_stored":264952,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_laten
cy_ns":0},"alerts":[]},{"osd":0,"up_from":8,"seq":34359738437,"num_pgs":50,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27964,"kb_used_data":1132,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939460,"statfs":{"total":21470642176,"available":21442007040,"internally_reserved":0,"allocated":1159168,"data_stored":724556,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[1,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":16384,"data_stored":1131,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":1039,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_
stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":46,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":16384,"data_stored":574,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":993,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":12288,"data_stored":528,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":12288,"data_stored":528,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":49152,"data_stored":1320,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":57344,"data_stored":1458,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":49152,"data_stored":1282,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":3,"total":0,"available":0,"internally_reserved":0,"
allocated":40960,"data_stored":1144,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":73728,"data_stored":1980,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":45056,"data_stored":1172,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":40960,"data_stored":1100,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":61440,"data_stored":1650,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":389,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":389,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":5,"total":0,"available":0,"
internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":389,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":6,"total":0,"available":0,"internally_reserved"
:0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":403,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":13,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":13,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":403,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocat
ed":8192,"data_stored":416,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-10T14:53:12.091 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 -- ceph pg dump --format=json 2026-03-10T14:53:13.769 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:13 vm00 bash[20726]: audit 2026-03-10T14:53:12.022922+0000 mgr.y (mgr.24425) 63 : audit [DBG] from='client.24545 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T14:53:13.769 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:13 vm00 bash[20726]: audit 2026-03-10T14:53:12.022922+0000 mgr.y (mgr.24425) 63 : audit [DBG] from='client.24545 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T14:53:13.769 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:13 vm00 bash[20726]: cluster 2026-03-10T14:53:12.238869+0000 mgr.y (mgr.24425) 64 : cluster [DBG] pgmap v27: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:53:13.769 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:13 vm00 bash[20726]: cluster 2026-03-10T14:53:12.238869+0000 mgr.y (mgr.24425) 64 : cluster [DBG] pgmap v27: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:53:13.769 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:13 vm00 bash[28403]: audit 2026-03-10T14:53:12.022922+0000 mgr.y (mgr.24425) 63 : audit [DBG] from='client.24545 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T14:53:13.769 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:13 vm00 bash[28403]: audit 2026-03-10T14:53:12.022922+0000 
mgr.y (mgr.24425) 63 : audit [DBG] from='client.24545 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T14:53:13.769 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:13 vm00 bash[28403]: cluster 2026-03-10T14:53:12.238869+0000 mgr.y (mgr.24425) 64 : cluster [DBG] pgmap v27: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:53:13.770 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:13 vm00 bash[28403]: cluster 2026-03-10T14:53:12.238869+0000 mgr.y (mgr.24425) 64 : cluster [DBG] pgmap v27: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:53:13.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:13 vm03 bash[23394]: audit 2026-03-10T14:53:12.022922+0000 mgr.y (mgr.24425) 63 : audit [DBG] from='client.24545 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T14:53:13.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:13 vm03 bash[23394]: audit 2026-03-10T14:53:12.022922+0000 mgr.y (mgr.24425) 63 : audit [DBG] from='client.24545 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T14:53:13.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:13 vm03 bash[23394]: cluster 2026-03-10T14:53:12.238869+0000 mgr.y (mgr.24425) 64 : cluster [DBG] pgmap v27: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:53:13.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:13 vm03 bash[23394]: cluster 2026-03-10T14:53:12.238869+0000 mgr.y (mgr.24425) 64 : cluster [DBG] pgmap v27: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:53:14.215 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:53:13 vm00 bash[21005]: 
::ffff:192.168.123.103 - - [10/Mar/2026:14:53:13] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:53:15.794 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/mon.c/config 2026-03-10T14:53:15.813 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:15 vm00 bash[20726]: cluster 2026-03-10T14:53:14.239229+0000 mgr.y (mgr.24425) 65 : cluster [DBG] pgmap v28: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T14:53:15.813 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:15 vm00 bash[20726]: cluster 2026-03-10T14:53:14.239229+0000 mgr.y (mgr.24425) 65 : cluster [DBG] pgmap v28: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T14:53:15.813 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:15 vm00 bash[28403]: cluster 2026-03-10T14:53:14.239229+0000 mgr.y (mgr.24425) 65 : cluster [DBG] pgmap v28: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T14:53:15.813 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:15 vm00 bash[28403]: cluster 2026-03-10T14:53:14.239229+0000 mgr.y (mgr.24425) 65 : cluster [DBG] pgmap v28: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T14:53:15.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:15 vm03 bash[23394]: cluster 2026-03-10T14:53:14.239229+0000 mgr.y (mgr.24425) 65 : cluster [DBG] pgmap v28: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T14:53:15.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:15 vm03 bash[23394]: cluster 2026-03-10T14:53:14.239229+0000 mgr.y (mgr.24425) 65 : cluster [DBG] pgmap v28: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T14:53:16.076 
INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T14:53:16.078 INFO:teuthology.orchestra.run.vm00.stderr:dumped all 2026-03-10T14:53:16.161 INFO:teuthology.orchestra.run.vm00.stdout:{"pg_ready":true,"pg_map":{"version":28,"stamp":"2026-03-10T14:53:14.239033+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":465419,"num_objects":199,"num_object_clones":0,"num_object_copies":597,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":199,"num_whiteouts":0,"num_read":911,"num_read_kb":770,"num_write":505,"num_write_kb":629,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":538,"ondisk_log_size":538,"up":396,"acting":396,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":396,"num_osds":8,"num_per_pool_osds":8,"num_per_pool_omap_osds":8,"kb":167739392,"kb_used":221480,"kb_used_data":6780,"kb_used_omap":12,"kb_used_meta":214515,"kb_avail":167517912,"statfs":{"total":171765137408,"available":171538341888,"internally_reserved":0,"allocated":6942720,"data_stored":3495745,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":12712,"internal_metadata":21
9663960},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":9,"num_read_kb":9,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"12.002431"},"pg_stats":[{"pgid":"6.1b","version":"0'0","reported_seq":21,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.337947+0000","last_change":"2026-03-10T14:51:59.510313+0000","last_active":"2026-03-10T14:52:24.337947+0000","last_peered":"2026-03-10T14:52:24.337947+0000","last_clean":"2026-03-10T14:52:24.337947+0000","last_became_active":"2026-03-10T14:51:59.510232+0000","last_became_peered":"2026-03-10T14:51:59.510232+0000","last_unstale":"2026-03-10T14:52:24.337947+0000"
,"last_undegraded":"2026-03-10T14:52:24.337947+0000","last_fullsized":"2026-03-10T14:52:24.337947+0000","mapping_epoch":63,"log_start":"0'0","ondisk_log_start":"0'0","created":63,"last_epoch_clean":64,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_clean_scrub_stamp":"2026-03-10T14:51:57.913312+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T16:03:24.096221+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,6],"acting":[3,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.1f","version":"0'0","reported_seq":33,"reported_epo
ch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.213792+0000","last_change":"2026-03-10T14:51:52.748444+0000","last_active":"2026-03-10T14:52:24.213792+0000","last_peered":"2026-03-10T14:52:24.213792+0000","last_clean":"2026-03-10T14:52:24.213792+0000","last_became_active":"2026-03-10T14:51:52.748359+0000","last_became_peered":"2026-03-10T14:51:52.748359+0000","last_unstale":"2026-03-10T14:52:24.213792+0000","last_undegraded":"2026-03-10T14:52:24.213792+0000","last_fullsized":"2026-03-10T14:52:24.213792+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_clean_scrub_stamp":"2026-03-10T14:51:51.696445+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-12T02:13:48.867889+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,4],"acting":[0,7,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.1e","version":"65'10","reported_seq":44,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.338107+0000","last_change":"2026-03-10T14:51:54.924877+0000","last_active":"2026-03-10T14:52:24.338107+0000","last_peered":"2026-03-10T14:52:24.338107+0000","last_clean":"2026-03-10T14:52:24.338107+0000","last_became_active":"2026-03-10T14:51:54.924720+0000","last_became_peered":"2026-03-10T14:51:54.924720+0000","last_unstale":"2026-03-10T14:52:24.338107+0000","last_undegraded":"2026-03-10T14:52:24.338107+0000","last_fullsized":"2026-03-10T14:52:24.338107+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:53.7
08867+0000","last_clean_scrub_stamp":"2026-03-10T14:51:53.708867+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-12T02:47:34.514256+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,2],"acting":[3,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.18","version":"0'0","reported_seq":25,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.329046+0000","last_change":"2026-03-10T14:51:56.944363+0000","last_active":"2026-03-10T14:52:24.329046+0000","last_peered":"2026-03-10T14:52:24.329046+0000","last_clean":"2026-03-10T14:52:24.329046+0000","last_became_active":"2026-03-10T14:51:56.943948+0000","last_became_peered":"2026-03-10T14:51:56.943948+00
00","last_unstale":"2026-03-10T14:52:24.329046+0000","last_undegraded":"2026-03-10T14:52:24.329046+0000","last_fullsized":"2026-03-10T14:52:24.329046+0000","mapping_epoch":61,"log_start":"0'0","ondisk_log_start":"0'0","created":61,"last_epoch_clean":62,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_clean_scrub_stamp":"2026-03-10T14:51:55.902610+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-12T01:33:25.301504+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,6,1],"acting":[4,6,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2
.1e","version":"0'0","reported_seq":33,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.337412+0000","last_change":"2026-03-10T14:51:52.731392+0000","last_active":"2026-03-10T14:52:24.337412+0000","last_peered":"2026-03-10T14:52:24.337412+0000","last_clean":"2026-03-10T14:52:24.337412+0000","last_became_active":"2026-03-10T14:51:52.731313+0000","last_became_peered":"2026-03-10T14:51:52.731313+0000","last_unstale":"2026-03-10T14:52:24.337412+0000","last_undegraded":"2026-03-10T14:52:24.337412+0000","last_fullsized":"2026-03-10T14:52:24.337412+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_clean_scrub_stamp":"2026-03-10T14:51:51.696445+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-12T01:21:58.842652+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,5],"acting":[3,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.1f","version":"65'11","reported_seq":48,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.214288+0000","last_change":"2026-03-10T14:51:54.926737+0000","last_active":"2026-03-10T14:52:24.214288+0000","last_peered":"2026-03-10T14:52:24.214288+0000","last_clean":"2026-03-10T14:52:24.214288+0000","last_became_active":"2026-03-10T14:51:54.926418+0000","last_became_peered":"2026-03-10T14:51:54.926418+0000","last_unstale":"2026-03-10T14:52:24.214288+0000","last_undegraded":"2026-03-10T14:52:24.214288+0000","last_fullsized":"2026-03-10T14:52:24.214288+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:53.7
08867+0000","last_clean_scrub_stamp":"2026-03-10T14:51:53.708867+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T22:42:57.575249+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,2],"acting":[0,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"5.19","version":"0'0","reported_seq":25,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.212742+0000","last_change":"2026-03-10T14:51:56.946648+0000","last_active":"2026-03-10T14:52:24.212742+0000","last_peered":"2026-03-10T14:52:24.212742+0000","last_clean":"2026-03-10T14:52:24.212742+0000","last_became_active":"2026-03-10T14:51:56.946448+0000","last_became_peered":"2026-03-10T14:51:56.946448+
0000","last_unstale":"2026-03-10T14:52:24.212742+0000","last_undegraded":"2026-03-10T14:52:24.212742+0000","last_fullsized":"2026-03-10T14:52:24.212742+0000","mapping_epoch":61,"log_start":"0'0","ondisk_log_start":"0'0","created":61,"last_epoch_clean":62,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_clean_scrub_stamp":"2026-03-10T14:51:55.902610+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T20:27:44.047187+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,7],"acting":[1,5,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":
"6.1a","version":"0'0","reported_seq":21,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.328528+0000","last_change":"2026-03-10T14:51:58.950929+0000","last_active":"2026-03-10T14:52:24.328528+0000","last_peered":"2026-03-10T14:52:24.328528+0000","last_clean":"2026-03-10T14:52:24.328528+0000","last_became_active":"2026-03-10T14:51:58.950758+0000","last_became_peered":"2026-03-10T14:51:58.950758+0000","last_unstale":"2026-03-10T14:52:24.328528+0000","last_undegraded":"2026-03-10T14:52:24.328528+0000","last_fullsized":"2026-03-10T14:52:24.328528+0000","mapping_epoch":63,"log_start":"0'0","ondisk_log_start":"0'0","created":63,"last_epoch_clean":64,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_clean_scrub_stamp":"2026-03-10T14:51:57.913312+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T20:13:31.981592+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,5,1],"acting":[4,5,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.1d","version":"0'0","reported_seq":33,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.262966+0000","last_change":"2026-03-10T14:51:52.749960+0000","last_active":"2026-03-10T14:52:25.262966+0000","last_peered":"2026-03-10T14:52:25.262966+0000","last_clean":"2026-03-10T14:52:25.262966+0000","last_became_active":"2026-03-10T14:51:52.749751+0000","last_became_peered":"2026-03-10T14:51:52.749751+0000","last_unstale":"2026-03-10T14:52:25.262966+0000","last_undegraded":"2026-03-10T14:52:25.262966+0000","last_fullsized":"2026-03-10T14:52:25.262966+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:51.696
445+0000","last_clean_scrub_stamp":"2026-03-10T14:51:51.696445+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T21:38:35.842261+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,0],"acting":[7,6,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.1c","version":"65'15","reported_seq":54,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.211342+0000","last_change":"2026-03-10T14:51:54.930411+0000","last_active":"2026-03-10T14:52:24.211342+0000","last_peered":"2026-03-10T14:52:24.211342+0000","last_clean":"2026-03-10T14:52:24.211342+0000","last_became_active":"2026-03-10T14:51:54.930337+0000","last_became_peered":"2026-03-10T14:51:54.930337+0000","l
ast_unstale":"2026-03-10T14:52:24.211342+0000","last_undegraded":"2026-03-10T14:52:24.211342+0000","last_fullsized":"2026-03-10T14:52:24.211342+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_clean_scrub_stamp":"2026-03-10T14:51:53.708867+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T20:42:25.792907+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,1],"acting":[5,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":
"5.1a","version":"0'0","reported_seq":25,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.264391+0000","last_change":"2026-03-10T14:51:56.948720+0000","last_active":"2026-03-10T14:52:25.264391+0000","last_peered":"2026-03-10T14:52:25.264391+0000","last_clean":"2026-03-10T14:52:25.264391+0000","last_became_active":"2026-03-10T14:51:56.948586+0000","last_became_peered":"2026-03-10T14:51:56.948586+0000","last_unstale":"2026-03-10T14:52:25.264391+0000","last_undegraded":"2026-03-10T14:52:25.264391+0000","last_fullsized":"2026-03-10T14:52:25.264391+0000","mapping_epoch":61,"log_start":"0'0","ondisk_log_start":"0'0","created":61,"last_epoch_clean":62,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_clean_scrub_stamp":"2026-03-10T14:51:55.902610+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:59:29.559689+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,1],"acting":[7,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.19","version":"0'0","reported_seq":21,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.211315+0000","last_change":"2026-03-10T14:51:58.952934+0000","last_active":"2026-03-10T14:52:24.211315+0000","last_peered":"2026-03-10T14:52:24.211315+0000","last_clean":"2026-03-10T14:52:24.211315+0000","last_became_active":"2026-03-10T14:51:58.952483+0000","last_became_peered":"2026-03-10T14:51:58.952483+0000","last_unstale":"2026-03-10T14:52:24.211315+0000","last_undegraded":"2026-03-10T14:52:24.211315+0000","last_fullsized":"2026-03-10T14:52:24.211315+0000","mapping_epoch":63,"log_start":"0'0","ondisk_log_start":"0'0","created":63,"last_epoch_clean":64,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:57.913
312+0000","last_clean_scrub_stamp":"2026-03-10T14:51:57.913312+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-12T00:22:30.851807+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,3],"acting":[5,1,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"2.1c","version":"0'0","reported_seq":33,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.263077+0000","last_change":"2026-03-10T14:51:52.745241+0000","last_active":"2026-03-10T14:52:25.263077+0000","last_peered":"2026-03-10T14:52:25.263077+0000","last_clean":"2026-03-10T14:52:25.263077+0000","last_became_active":"2026-03-10T14:51:52.741320+0000","last_became_peered":"2026-03-10T14:51:52.741320+0000","las
t_unstale":"2026-03-10T14:52:25.263077+0000","last_undegraded":"2026-03-10T14:52:25.263077+0000","last_fullsized":"2026-03-10T14:52:25.263077+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_clean_scrub_stamp":"2026-03-10T14:51:51.696445+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T22:11:20.736130+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,5,2],"acting":[7,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.1d","ve
rsion":"65'12","reported_seq":52,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.210550+0000","last_change":"2026-03-10T14:51:54.933735+0000","last_active":"2026-03-10T14:52:24.210550+0000","last_peered":"2026-03-10T14:52:24.210550+0000","last_clean":"2026-03-10T14:52:24.210550+0000","last_became_active":"2026-03-10T14:51:54.933207+0000","last_became_peered":"2026-03-10T14:51:54.933207+0000","last_unstale":"2026-03-10T14:52:24.210550+0000","last_undegraded":"2026-03-10T14:52:24.210550+0000","last_fullsized":"2026-03-10T14:52:24.210550+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_clean_scrub_stamp":"2026-03-10T14:51:53.708867+0000","objects_scrubbed":0,"log_size":12,"log_dups_size":0,"ondisk_log_size":12,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T22:27:21.133155+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":220,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":25,"num_read_kb":16,"num_write":14,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,6],"acting":[5,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.1b","version":"0'0","reported_seq":25,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.210496+0000","last_change":"2026-03-10T14:51:56.939450+0000","last_active":"2026-03-10T14:52:24.210496+0000","last_peered":"2026-03-10T14:52:24.210496+0000","last_clean":"2026-03-10T14:52:24.210496+0000","last_became_active":"2026-03-10T14:51:56.939338+0000","last_became_peered":"2026-03-10T14:51:56.939338+0000","last_unstale":"2026-03-10T14:52:24.210496+0000","last_undegraded":"2026-03-10T14:52:24.210496+0000","last_fullsized":"2026-03-10T14:52:24.210496+0000","mapping_epoch":61,"log_start":"0'0","ondisk_log_start":"0'0","created":61,"last_epoch_clean":62,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:
55.902610+0000","last_clean_scrub_stamp":"2026-03-10T14:51:55.902610+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T16:22:14.360831+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,0,7],"acting":[5,0,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.18","version":"0'0","reported_seq":21,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.213696+0000","last_change":"2026-03-10T14:51:58.945479+0000","last_active":"2026-03-10T14:52:24.213696+0000","last_peered":"2026-03-10T14:52:24.213696+0000","last_clean":"2026-03-10T14:52:24.213696+0000","last_became_active":"2026-03-10T14:51:58.945393+0000","last_became_peered":"2026-03-10T14:51:58.945393+0000
","last_unstale":"2026-03-10T14:52:24.213696+0000","last_undegraded":"2026-03-10T14:52:24.213696+0000","last_fullsized":"2026-03-10T14:52:24.213696+0000","mapping_epoch":63,"log_start":"0'0","ondisk_log_start":"0'0","created":63,"last_epoch_clean":64,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_clean_scrub_stamp":"2026-03-10T14:51:57.913312+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T18:09:12.837014+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,7],"acting":[0,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.a
","version":"65'19","reported_seq":60,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.243184+0000","last_change":"2026-03-10T14:51:54.941869+0000","last_active":"2026-03-10T14:52:25.243184+0000","last_peered":"2026-03-10T14:52:25.243184+0000","last_clean":"2026-03-10T14:52:25.243184+0000","last_became_active":"2026-03-10T14:51:54.941693+0000","last_became_peered":"2026-03-10T14:51:54.941693+0000","last_unstale":"2026-03-10T14:52:25.243184+0000","last_undegraded":"2026-03-10T14:52:25.243184+0000","last_fullsized":"2026-03-10T14:52:25.243184+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_clean_scrub_stamp":"2026-03-10T14:51:53.708867+0000","objects_scrubbed":0,"log_size":19,"log_dups_size":0,"ondisk_log_size":19,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:37:09.120561+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":9,"num_object_clones":0,"num_object_copies":27,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":9,"num_whiteouts":0,"num_read":32,"num_read_kb":21,"num_write":20,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,1],"acting":[6,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"2.b","version":"0'0","reported_seq":33,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.263195+0000","last_change":"2026-03-10T14:51:52.746413+0000","last_active":"2026-03-10T14:52:25.263195+0000","last_peered":"2026-03-10T14:52:25.263195+0000","last_clean":"2026-03-10T14:52:25.263195+0000","last_became_active":"2026-03-10T14:51:52.746308+0000","last_became_peered":"2026-03-10T14:51:52.746308+0000","last_unstale":"2026-03-10T14:52:25.263195+0000","last_undegraded":"2026-03-10T14:52:25.263195+0000","last_fullsized":"2026-03-10T14:52:25.263195+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:5
1.696445+0000","last_clean_scrub_stamp":"2026-03-10T14:51:51.696445+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T19:39:13.250651+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,5],"acting":[7,4,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.c","version":"0'0","reported_seq":25,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.212670+0000","last_change":"2026-03-10T14:51:56.948114+0000","last_active":"2026-03-10T14:52:24.212670+0000","last_peered":"2026-03-10T14:52:24.212670+0000","last_clean":"2026-03-10T14:52:24.212670+0000","last_became_active":"2026-03-10T14:51:56.947831+0000","last_became_peered":"2026-03-10T14:51:56.947831+0000",
"last_unstale":"2026-03-10T14:52:24.212670+0000","last_undegraded":"2026-03-10T14:52:24.212670+0000","last_fullsized":"2026-03-10T14:52:24.212670+0000","mapping_epoch":61,"log_start":"0'0","ondisk_log_start":"0'0","created":61,"last_epoch_clean":62,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_clean_scrub_stamp":"2026-03-10T14:51:55.902610+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T20:14:03.965576+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,4,0],"acting":[1,4,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.f",
"version":"0'0","reported_seq":21,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.194790+0000","last_change":"2026-03-10T14:51:58.948870+0000","last_active":"2026-03-10T14:52:25.194790+0000","last_peered":"2026-03-10T14:52:25.194790+0000","last_clean":"2026-03-10T14:52:25.194790+0000","last_became_active":"2026-03-10T14:51:58.948654+0000","last_became_peered":"2026-03-10T14:51:58.948654+0000","last_unstale":"2026-03-10T14:52:25.194790+0000","last_undegraded":"2026-03-10T14:52:25.194790+0000","last_fullsized":"2026-03-10T14:52:25.194790+0000","mapping_epoch":63,"log_start":"0'0","ondisk_log_start":"0'0","created":63,"last_epoch_clean":64,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_clean_scrub_stamp":"2026-03-10T14:51:57.913312+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T21:48:58.479864+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,3,4],"acting":[2,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"3.b","version":"65'9","reported_seq":45,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.337525+0000","last_change":"2026-03-10T14:51:54.930764+0000","last_active":"2026-03-10T14:52:24.337525+0000","last_peered":"2026-03-10T14:52:24.337525+0000","last_clean":"2026-03-10T14:52:24.337525+0000","last_became_active":"2026-03-10T14:51:54.930199+0000","last_became_peered":"2026-03-10T14:51:54.930199+0000","last_unstale":"2026-03-10T14:52:24.337525+0000","last_undegraded":"2026-03-10T14:52:24.337525+0000","last_fullsized":"2026-03-10T14:52:24.337525+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:53.708
867+0000","last_clean_scrub_stamp":"2026-03-10T14:51:53.708867+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T19:39:08.203364+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,4],"acting":[3,0,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.a","version":"0'0","reported_seq":33,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.211853+0000","last_change":"2026-03-10T14:51:52.748094+0000","last_active":"2026-03-10T14:52:24.211853+0000","last_peered":"2026-03-10T14:52:24.211853+0000","last_clean":"2026-03-10T14:52:24.211853+0000","last_became_active":"2026-03-10T14:51:52.748012+0000","last_became_peered":"2026-03-10T14:51:52.748012+0000"
,"last_unstale":"2026-03-10T14:52:24.211853+0000","last_undegraded":"2026-03-10T14:52:24.211853+0000","last_fullsized":"2026-03-10T14:52:24.211853+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_clean_scrub_stamp":"2026-03-10T14:51:51.696445+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T21:44:59.562570+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,3,7],"acting":[1,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"5.d"
,"version":"65'11","reported_seq":51,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:53:02.793502+0000","last_change":"2026-03-10T14:51:56.941162+0000","last_active":"2026-03-10T14:53:02.793502+0000","last_peered":"2026-03-10T14:53:02.793502+0000","last_clean":"2026-03-10T14:53:02.793502+0000","last_became_active":"2026-03-10T14:51:56.940757+0000","last_became_peered":"2026-03-10T14:51:56.940757+0000","last_unstale":"2026-03-10T14:53:02.793502+0000","last_undegraded":"2026-03-10T14:53:02.793502+0000","last_fullsized":"2026-03-10T14:53:02.793502+0000","mapping_epoch":61,"log_start":"0'0","ondisk_log_start":"0'0","created":61,"last_epoch_clean":62,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_clean_scrub_stamp":"2026-03-10T14:51:55.902610+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T17:53:46.217768+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,7,5],"acting":[2,7,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.e","version":"0'0","reported_seq":21,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.329820+0000","last_change":"2026-03-10T14:51:58.942231+0000","last_active":"2026-03-10T14:52:24.329820+0000","last_peered":"2026-03-10T14:52:24.329820+0000","last_clean":"2026-03-10T14:52:24.329820+0000","last_became_active":"2026-03-10T14:51:58.941991+0000","last_became_peered":"2026-03-10T14:51:58.941991+0000","last_unstale":"2026-03-10T14:52:24.329820+0000","last_undegraded":"2026-03-10T14:52:24.329820+0000","last_fullsized":"2026-03-10T14:52:24.329820+0000","mapping_epoch":63,"log_start":"0'0","ondisk_log_start":"0'0","created":63,"last_epoch_clean":64,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:57.9133
12+0000","last_clean_scrub_stamp":"2026-03-10T14:51:57.913312+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T22:56:44.140614+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,1,2],"acting":[4,1,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.8","version":"65'15","reported_seq":54,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.337580+0000","last_change":"2026-03-10T14:51:54.919825+0000","last_active":"2026-03-10T14:52:24.337580+0000","last_peered":"2026-03-10T14:52:24.337580+0000","last_clean":"2026-03-10T14:52:24.337580+0000","last_became_active":"2026-03-10T14:51:54.919644+0000","last_became_peered":"2026-03-10T14:51:54.919644+0000","las
t_unstale":"2026-03-10T14:52:24.337580+0000","last_undegraded":"2026-03-10T14:52:24.337580+0000","last_fullsized":"2026-03-10T14:52:24.337580+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_clean_scrub_stamp":"2026-03-10T14:51:53.708867+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T15:42:44.634847+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,7],"acting":[3,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2
.9","version":"0'0","reported_seq":33,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.211884+0000","last_change":"2026-03-10T14:51:52.743088+0000","last_active":"2026-03-10T14:52:24.211884+0000","last_peered":"2026-03-10T14:52:24.211884+0000","last_clean":"2026-03-10T14:52:24.211884+0000","last_became_active":"2026-03-10T14:51:52.742955+0000","last_became_peered":"2026-03-10T14:51:52.742955+0000","last_unstale":"2026-03-10T14:52:24.211884+0000","last_undegraded":"2026-03-10T14:52:24.211884+0000","last_fullsized":"2026-03-10T14:52:24.211884+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_clean_scrub_stamp":"2026-03-10T14:51:51.696445+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:37:32.602079+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,7,3],"acting":[1,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"5.e","version":"65'11","reported_seq":51,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:53:02.791188+0000","last_change":"2026-03-10T14:51:56.945138+0000","last_active":"2026-03-10T14:53:02.791188+0000","last_peered":"2026-03-10T14:53:02.791188+0000","last_clean":"2026-03-10T14:53:02.791188+0000","last_became_active":"2026-03-10T14:51:56.944465+0000","last_became_peered":"2026-03-10T14:51:56.944465+0000","last_unstale":"2026-03-10T14:53:02.791188+0000","last_undegraded":"2026-03-10T14:53:02.791188+0000","last_fullsized":"2026-03-10T14:53:02.791188+0000","mapping_epoch":61,"log_start":"0'0","ondisk_log_start":"0'0","created":61,"last_epoch_clean":62,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:55.90
2610+0000","last_clean_scrub_stamp":"2026-03-10T14:51:55.902610+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-12T02:15:38.509873+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,5,0],"acting":[4,5,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"6.d","version":"0'0","reported_seq":21,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.210186+0000","last_change":"2026-03-10T14:51:58.952345+0000","last_active":"2026-03-10T14:52:24.210186+0000","last_peered":"2026-03-10T14:52:24.210186+0000","last_clean":"2026-03-10T14:52:24.210186+0000","last_became_active":"2026-03-10T14:51:58.951477+0000","last_became_peered":"2026-03-10T14:51:58.951477+0000","l
ast_unstale":"2026-03-10T14:52:24.210186+0000","last_undegraded":"2026-03-10T14:52:24.210186+0000","last_fullsized":"2026-03-10T14:52:24.210186+0000","mapping_epoch":63,"log_start":"0'0","ondisk_log_start":"0'0","created":63,"last_epoch_clean":64,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_clean_scrub_stamp":"2026-03-10T14:51:57.913312+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T17:44:55.833148+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,0],"acting":[5,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"3.9","v
ersion":"65'12","reported_seq":52,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.328706+0000","last_change":"2026-03-10T14:51:54.927009+0000","last_active":"2026-03-10T14:52:24.328706+0000","last_peered":"2026-03-10T14:52:24.328706+0000","last_clean":"2026-03-10T14:52:24.328706+0000","last_became_active":"2026-03-10T14:51:54.926904+0000","last_became_peered":"2026-03-10T14:51:54.926904+0000","last_unstale":"2026-03-10T14:52:24.328706+0000","last_undegraded":"2026-03-10T14:52:24.328706+0000","last_fullsized":"2026-03-10T14:52:24.328706+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_clean_scrub_stamp":"2026-03-10T14:51:53.708867+0000","objects_scrubbed":0,"log_size":12,"log_dups_size":0,"ondisk_log_size":12,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T20:08:30.966606+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":220,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":25,"num_read_kb":16,"num_write":14,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,2,7],"acting":[4,2,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.8","version":"0'0","reported_seq":33,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.263231+0000","last_change":"2026-03-10T14:51:52.748200+0000","last_active":"2026-03-10T14:52:25.263231+0000","last_peered":"2026-03-10T14:52:25.263231+0000","last_clean":"2026-03-10T14:52:25.263231+0000","last_became_active":"2026-03-10T14:51:52.748100+0000","last_became_peered":"2026-03-10T14:51:52.748100+0000","last_unstale":"2026-03-10T14:52:25.263231+0000","last_undegraded":"2026-03-10T14:52:25.263231+0000","last_fullsized":"2026-03-10T14:52:25.263231+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:5
1.696445+0000","last_clean_scrub_stamp":"2026-03-10T14:51:51.696445+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T18:21:43.252153+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0,1],"acting":[7,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.f","version":"0'0","reported_seq":25,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.211550+0000","last_change":"2026-03-10T14:51:56.939022+0000","last_active":"2026-03-10T14:52:24.211550+0000","last_peered":"2026-03-10T14:52:24.211550+0000","last_clean":"2026-03-10T14:52:24.211550+0000","last_became_active":"2026-03-10T14:51:56.938004+0000","last_became_peered":"2026-03-10T14:51:56.938004+0000",
"last_unstale":"2026-03-10T14:52:24.211550+0000","last_undegraded":"2026-03-10T14:52:24.211550+0000","last_fullsized":"2026-03-10T14:52:24.211550+0000","mapping_epoch":61,"log_start":"0'0","ondisk_log_start":"0'0","created":61,"last_epoch_clean":62,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_clean_scrub_stamp":"2026-03-10T14:51:55.902610+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T15:53:13.390551+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,6],"acting":[5,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.c",
"version":"0'0","reported_seq":21,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.336970+0000","last_change":"2026-03-10T14:51:59.513685+0000","last_active":"2026-03-10T14:52:24.336970+0000","last_peered":"2026-03-10T14:52:24.336970+0000","last_clean":"2026-03-10T14:52:24.336970+0000","last_became_active":"2026-03-10T14:51:59.513553+0000","last_became_peered":"2026-03-10T14:51:59.513553+0000","last_unstale":"2026-03-10T14:52:24.336970+0000","last_undegraded":"2026-03-10T14:52:24.336970+0000","last_fullsized":"2026-03-10T14:52:24.336970+0000","mapping_epoch":63,"log_start":"0'0","ondisk_log_start":"0'0","created":63,"last_epoch_clean":64,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_clean_scrub_stamp":"2026-03-10T14:51:57.913312+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T17:08:16.694200+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,5],"acting":[3,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.6","version":"65'12","reported_seq":47,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.214518+0000","last_change":"2026-03-10T14:51:54.931786+0000","last_active":"2026-03-10T14:52:24.214518+0000","last_peered":"2026-03-10T14:52:24.214518+0000","last_clean":"2026-03-10T14:52:24.214518+0000","last_became_active":"2026-03-10T14:51:54.931647+0000","last_became_peered":"2026-03-10T14:51:54.931647+0000","last_unstale":"2026-03-10T14:52:24.214518+0000","last_undegraded":"2026-03-10T14:52:24.214518+0000","last_fullsized":"2026-03-10T14:52:24.214518+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:53.70
8867+0000","last_clean_scrub_stamp":"2026-03-10T14:51:53.708867+0000","objects_scrubbed":0,"log_size":12,"log_dups_size":0,"ondisk_log_size":12,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T18:07:39.349991+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":6,"num_object_clones":0,"num_object_copies":18,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":6,"num_whiteouts":0,"num_read":18,"num_read_kb":12,"num_write":12,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,4],"acting":[0,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"2.7","version":"0'0","reported_seq":33,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.243323+0000","last_change":"2026-03-10T14:51:52.741992+0000","last_active":"2026-03-10T14:52:25.243323+0000","last_peered":"2026-03-10T14:52:25.243323+0000","last_clean":"2026-03-10T14:52:25.243323+0000","last_became_active":"2026-03-10T14:51:52.741785+0000","last_became_peered":"2026-03-10T14:51:52.741785+0000
","last_unstale":"2026-03-10T14:52:25.243323+0000","last_undegraded":"2026-03-10T14:52:25.243323+0000","last_fullsized":"2026-03-10T14:52:25.243323+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_clean_scrub_stamp":"2026-03-10T14:51:51.696445+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-12T00:19:06.234347+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,7,2],"acting":[6,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"4.1
","version":"65'1","reported_seq":35,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.329766+0000","last_change":"2026-03-10T14:52:01.993419+0000","last_active":"2026-03-10T14:52:24.329766+0000","last_peered":"2026-03-10T14:52:24.329766+0000","last_clean":"2026-03-10T14:52:24.329766+0000","last_became_active":"2026-03-10T14:51:55.924128+0000","last_became_peered":"2026-03-10T14:51:55.924128+0000","last_unstale":"2026-03-10T14:52:24.329766+0000","last_undegraded":"2026-03-10T14:52:24.329766+0000","last_fullsized":"2026-03-10T14:52:24.329766+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:54.898851+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:54.898851+0000","last_clean_scrub_stamp":"2026-03-10T14:51:54.898851+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T17:12:36.513075+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00025000700000000001,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,5,6],"acting":[4,5,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.0","version":"65'11","reported_seq":51,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:53:02.793596+0000","last_change":"2026-03-10T14:51:56.939092+0000","last_active":"2026-03-10T14:53:02.793596+0000","last_peered":"2026-03-10T14:53:02.793596+0000","last_clean":"2026-03-10T14:53:02.793596+0000","last_became_active":"2026-03-10T14:51:56.938931+0000","last_became_peered":"2026-03-10T14:51:56.938931+0000","last_unstale":"2026-03-10T14:53:02.793596+0000","last_undegraded":"2026-03-10T14:53:02.793596+0000","last_fullsized":"2026-03-10T14:53:02.793596+0000","mapping_epoch":61,"log_start":"0'0","ondisk_log_start":"0'0","created":61,"last_epoch_clean":62,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2
026-03-10T14:51:55.902610+0000","last_clean_scrub_stamp":"2026-03-10T14:51:55.902610+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T17:30:37.620150+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,4],"acting":[3,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.3","version":"0'0","reported_seq":21,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.263735+0000","last_change":"2026-03-10T14:51:59.508728+0000","last_active":"2026-03-10T14:52:25.263735+0000","last_peered":"2026-03-10T14:52:25.263735+0000","last_clean":"2026-03-10T14:52:25.263735+0000","last_became_active":"2026-03-10T14:51:59.508529+0000","last_became_peered":"2026-03-10T14:
51:59.508529+0000","last_unstale":"2026-03-10T14:52:25.263735+0000","last_undegraded":"2026-03-10T14:52:25.263735+0000","last_fullsized":"2026-03-10T14:52:25.263735+0000","mapping_epoch":63,"log_start":"0'0","ondisk_log_start":"0'0","created":63,"last_epoch_clean":64,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_clean_scrub_stamp":"2026-03-10T14:51:57.913312+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T17:12:36.680680+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,2],"acting":[7,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps"
:[]},{"pgid":"3.7","version":"65'13","reported_seq":56,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.337873+0000","last_change":"2026-03-10T14:51:54.918043+0000","last_active":"2026-03-10T14:52:24.337873+0000","last_peered":"2026-03-10T14:52:24.337873+0000","last_clean":"2026-03-10T14:52:24.337873+0000","last_became_active":"2026-03-10T14:51:54.917933+0000","last_became_peered":"2026-03-10T14:51:54.917933+0000","last_unstale":"2026-03-10T14:52:24.337873+0000","last_undegraded":"2026-03-10T14:52:24.337873+0000","last_fullsized":"2026-03-10T14:52:24.337873+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_clean_scrub_stamp":"2026-03-10T14:51:53.708867+0000","objects_scrubbed":0,"log_size":13,"log_dups_size":0,"ondisk_log_size":13,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T21:05:08.995873+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":330,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":30,"num_read_kb":19,"num_write":16,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,0],"acting":[3,7,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.6","version":"58'1","reported_seq":34,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.211927+0000","last_change":"2026-03-10T14:51:52.733363+0000","last_active":"2026-03-10T14:52:24.211927+0000","last_peered":"2026-03-10T14:52:24.211927+0000","last_clean":"2026-03-10T14:52:24.211927+0000","last_became_active":"2026-03-10T14:51:52.733222+0000","last_became_peered":"2026-03-10T14:51:52.733222+0000","last_unstale":"2026-03-10T14:52:24.211927+0000","last_undegraded":"2026-03-10T14:52:24.211927+0000","last_fullsized":"2026-03-10T14:52:24.211927+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:
51.696445+0000","last_clean_scrub_stamp":"2026-03-10T14:51:51.696445+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T18:59:28.187606+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":46,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":1,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,6,4],"acting":[1,6,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"4.0","version":"67'5","reported_seq":104,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:53:08.565113+0000","last_change":"2026-03-10T14:52:02.062549+0000","last_active":"2026-03-10T14:53:08.565113+0000","last_peered":"2026-03-10T14:53:08.565113+0000","last_clean":"2026-03-10T14:53:08.565113+0000","last_became_active":"2026-03-10T14:51:55.924328+0000","last_became_peered":"2026-03-10T14:51:55.924328+00
00","last_unstale":"2026-03-10T14:53:08.565113+0000","last_undegraded":"2026-03-10T14:53:08.565113+0000","last_fullsized":"2026-03-10T14:53:08.565113+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:54.898851+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:54.898851+0000","last_clean_scrub_stamp":"2026-03-10T14:51:54.898851+0000","objects_scrubbed":0,"log_size":5,"log_dups_size":0,"ondisk_log_size":5,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T16:54:12.007760+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.000314672,"stat_sum":{"num_bytes":389,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":67,"num_read_kb":62,"num_write":4,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,0],"acting":[3,7,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":
[]},{"pgid":"5.1","version":"0'0","reported_seq":25,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.329365+0000","last_change":"2026-03-10T14:51:56.943429+0000","last_active":"2026-03-10T14:52:24.329365+0000","last_peered":"2026-03-10T14:52:24.329365+0000","last_clean":"2026-03-10T14:52:24.329365+0000","last_became_active":"2026-03-10T14:51:56.943195+0000","last_became_peered":"2026-03-10T14:51:56.943195+0000","last_unstale":"2026-03-10T14:52:24.329365+0000","last_undegraded":"2026-03-10T14:52:24.329365+0000","last_fullsized":"2026-03-10T14:52:24.329365+0000","mapping_epoch":61,"log_start":"0'0","ondisk_log_start":"0'0","created":61,"last_epoch_clean":62,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_clean_scrub_stamp":"2026-03-10T14:51:55.902610+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:56:39.901791+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,7],"acting":[4,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"6.2","version":"0'0","reported_seq":21,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.329327+0000","last_change":"2026-03-10T14:51:58.942318+0000","last_active":"2026-03-10T14:52:24.329327+0000","last_peered":"2026-03-10T14:52:24.329327+0000","last_clean":"2026-03-10T14:52:24.329327+0000","last_became_active":"2026-03-10T14:51:58.942153+0000","last_became_peered":"2026-03-10T14:51:58.942153+0000","last_unstale":"2026-03-10T14:52:24.329327+0000","last_undegraded":"2026-03-10T14:52:24.329327+0000","last_fullsized":"2026-03-10T14:52:24.329327+0000","mapping_epoch":63,"log_start":"0'0","ondisk_log_start":"0'0","created":63,"last_epoch_clean":64,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:57.9133
12+0000","last_clean_scrub_stamp":"2026-03-10T14:51:57.913312+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-12T00:08:33.354217+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,2],"acting":[4,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.4","version":"65'30","reported_seq":95,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:53:02.794050+0000","last_change":"2026-03-10T14:51:54.934244+0000","last_active":"2026-03-10T14:53:02.794050+0000","last_peered":"2026-03-10T14:53:02.794050+0000","last_clean":"2026-03-10T14:53:02.794050+0000","last_became_active":"2026-03-10T14:51:54.934074+0000","last_became_peered":"2026-03-10T14:51:54.934074+0000","las
t_unstale":"2026-03-10T14:53:02.794050+0000","last_undegraded":"2026-03-10T14:53:02.794050+0000","last_fullsized":"2026-03-10T14:53:02.794050+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_clean_scrub_stamp":"2026-03-10T14:51:53.708867+0000","objects_scrubbed":0,"log_size":30,"log_dups_size":0,"ondisk_log_size":30,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-12T02:08:46.795398+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":358,"num_objects":10,"num_object_clones":0,"num_object_copies":30,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":10,"num_whiteouts":0,"num_read":51,"num_read_kb":36,"num_write":26,"num_write_kb":4,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2,5],"acting":[1,2,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":
"2.5","version":"0'0","reported_seq":33,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.263265+0000","last_change":"2026-03-10T14:51:52.747373+0000","last_active":"2026-03-10T14:52:25.263265+0000","last_peered":"2026-03-10T14:52:25.263265+0000","last_clean":"2026-03-10T14:52:25.263265+0000","last_became_active":"2026-03-10T14:51:52.747060+0000","last_became_peered":"2026-03-10T14:51:52.747060+0000","last_unstale":"2026-03-10T14:52:25.263265+0000","last_undegraded":"2026-03-10T14:52:25.263265+0000","last_fullsized":"2026-03-10T14:52:25.263265+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_clean_scrub_stamp":"2026-03-10T14:51:51.696445+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T22:15:26.083583+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0,4],"acting":[7,0,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.2","version":"0'0","reported_seq":25,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.243562+0000","last_change":"2026-03-10T14:51:56.952131+0000","last_active":"2026-03-10T14:52:25.243562+0000","last_peered":"2026-03-10T14:52:25.243562+0000","last_clean":"2026-03-10T14:52:25.243562+0000","last_became_active":"2026-03-10T14:51:56.952035+0000","last_became_peered":"2026-03-10T14:51:56.952035+0000","last_unstale":"2026-03-10T14:52:25.243562+0000","last_undegraded":"2026-03-10T14:52:25.243562+0000","last_fullsized":"2026-03-10T14:52:25.243562+0000","mapping_epoch":61,"log_start":"0'0","ondisk_log_start":"0'0","created":61,"last_epoch_clean":62,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:55.9026
10+0000","last_clean_scrub_stamp":"2026-03-10T14:51:55.902610+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-12T02:07:25.644632+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,0,5],"acting":[6,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"6.1","version":"0'0","reported_seq":21,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.212310+0000","last_change":"2026-03-10T14:51:59.466645+0000","last_active":"2026-03-10T14:52:24.212310+0000","last_peered":"2026-03-10T14:52:24.212310+0000","last_clean":"2026-03-10T14:52:24.212310+0000","last_became_active":"2026-03-10T14:51:59.465974+0000","last_became_peered":"2026-03-10T14:51:59.465974+0000","last_
unstale":"2026-03-10T14:52:24.212310+0000","last_undegraded":"2026-03-10T14:52:24.212310+0000","last_fullsized":"2026-03-10T14:52:24.212310+0000","mapping_epoch":63,"log_start":"0'0","ondisk_log_start":"0'0","created":63,"last_epoch_clean":64,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_clean_scrub_stamp":"2026-03-10T14:51:57.913312+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T18:28:00.892473+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,6,2],"acting":[1,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.5","versi
on":"65'16","reported_seq":67,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:53:02.791309+0000","last_change":"2026-03-10T14:51:54.930086+0000","last_active":"2026-03-10T14:53:02.791309+0000","last_peered":"2026-03-10T14:53:02.791309+0000","last_clean":"2026-03-10T14:53:02.791309+0000","last_became_active":"2026-03-10T14:51:54.925263+0000","last_became_peered":"2026-03-10T14:51:54.925263+0000","last_unstale":"2026-03-10T14:53:02.791309+0000","last_undegraded":"2026-03-10T14:53:02.791309+0000","last_fullsized":"2026-03-10T14:53:02.791309+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_clean_scrub_stamp":"2026-03-10T14:51:53.708867+0000","objects_scrubbed":0,"log_size":16,"log_dups_size":0,"ondisk_log_size":16,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:23:48.577488+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":154,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":25,"num_read_kb":15,"num_write":13,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,2],"acting":[5,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"2.4","version":"0'0","reported_seq":33,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.211960+0000","last_change":"2026-03-10T14:51:52.729263+0000","last_active":"2026-03-10T14:52:24.211960+0000","last_peered":"2026-03-10T14:52:24.211960+0000","last_clean":"2026-03-10T14:52:24.211960+0000","last_became_active":"2026-03-10T14:51:52.729190+0000","last_became_peered":"2026-03-10T14:51:52.729190+0000","last_unstale":"2026-03-10T14:52:24.211960+0000","last_undegraded":"2026-03-10T14:52:24.211960+0000","last_fullsized":"2026-03-10T14:52:24.211960+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:5
1.696445+0000","last_clean_scrub_stamp":"2026-03-10T14:51:51.696445+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T23:36:34.378261+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,0,7],"acting":[1,0,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"4.2","version":"67'2","reported_seq":36,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.211985+0000","last_change":"2026-03-10T14:52:02.065640+0000","last_active":"2026-03-10T14:52:24.211985+0000","last_peered":"2026-03-10T14:52:24.211985+0000","last_clean":"2026-03-10T14:52:24.211985+0000","last_became_active":"2026-03-10T14:51:55.930421+0000","last_became_peered":"2026-03-10T14:51:55.930421+0000"
,"last_unstale":"2026-03-10T14:52:24.211985+0000","last_undegraded":"2026-03-10T14:52:24.211985+0000","last_fullsized":"2026-03-10T14:52:24.211985+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:54.898851+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:54.898851+0000","last_clean_scrub_stamp":"2026-03-10T14:51:54.898851+0000","objects_scrubbed":0,"log_size":2,"log_dups_size":0,"ondisk_log_size":2,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T17:36:25.736444+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.0010901909999999999,"stat_sum":{"num_bytes":19,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,4],"acting":[1,5,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_sna
ps":[]},{"pgid":"5.3","version":"65'11","reported_seq":51,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:53:02.793746+0000","last_change":"2026-03-10T14:51:56.937408+0000","last_active":"2026-03-10T14:53:02.793746+0000","last_peered":"2026-03-10T14:53:02.793746+0000","last_clean":"2026-03-10T14:53:02.793746+0000","last_became_active":"2026-03-10T14:51:56.937324+0000","last_became_peered":"2026-03-10T14:51:56.937324+0000","last_unstale":"2026-03-10T14:53:02.793746+0000","last_undegraded":"2026-03-10T14:53:02.793746+0000","last_fullsized":"2026-03-10T14:53:02.793746+0000","mapping_epoch":61,"log_start":"0'0","ondisk_log_start":"0'0","created":61,"last_epoch_clean":62,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_clean_scrub_stamp":"2026-03-10T14:51:55.902610+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T22:16:18.375445+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,6,5],"acting":[0,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"6.0","version":"0'0","reported_seq":21,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.214023+0000","last_change":"2026-03-10T14:51:58.942257+0000","last_active":"2026-03-10T14:52:24.214023+0000","last_peered":"2026-03-10T14:52:24.214023+0000","last_clean":"2026-03-10T14:52:24.214023+0000","last_became_active":"2026-03-10T14:51:58.942107+0000","last_became_peered":"2026-03-10T14:51:58.942107+0000","last_unstale":"2026-03-10T14:52:24.214023+0000","last_undegraded":"2026-03-10T14:52:24.214023+0000","last_fullsized":"2026-03-10T14:52:24.214023+0000","mapping_epoch":63,"log_start":"0'0","ondisk_log_start":"0'0","created":63,"last_epoch_clean":64,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:57.9133
12+0000","last_clean_scrub_stamp":"2026-03-10T14:51:57.913312+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T17:31:10.008438+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,3,2],"acting":[0,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.3","version":"65'19","reported_seq":65,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.329159+0000","last_change":"2026-03-10T14:51:54.926713+0000","last_active":"2026-03-10T14:52:24.329159+0000","last_peered":"2026-03-10T14:52:24.329159+0000","last_clean":"2026-03-10T14:52:24.329159+0000","last_became_active":"2026-03-10T14:51:54.926410+0000","last_became_peered":"2026-03-10T14:51:54.926410+0000","las
t_unstale":"2026-03-10T14:52:24.329159+0000","last_undegraded":"2026-03-10T14:52:24.329159+0000","last_fullsized":"2026-03-10T14:52:24.329159+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_clean_scrub_stamp":"2026-03-10T14:51:53.708867+0000","objects_scrubbed":0,"log_size":19,"log_dups_size":0,"ondisk_log_size":19,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T15:44:59.827455+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":330,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":39,"num_read_kb":25,"num_write":22,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,0,6],"acting":[4,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2
.2","version":"0'0","reported_seq":33,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.211250+0000","last_change":"2026-03-10T14:51:52.729818+0000","last_active":"2026-03-10T14:52:24.211250+0000","last_peered":"2026-03-10T14:52:24.211250+0000","last_clean":"2026-03-10T14:52:24.211250+0000","last_became_active":"2026-03-10T14:51:52.728865+0000","last_became_peered":"2026-03-10T14:51:52.728865+0000","last_unstale":"2026-03-10T14:52:24.211250+0000","last_undegraded":"2026-03-10T14:52:24.211250+0000","last_fullsized":"2026-03-10T14:52:24.211250+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_clean_scrub_stamp":"2026-03-10T14:51:51.696445+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T17:53:19.462601+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,6],"acting":[5,1,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.5","version":"0'0","reported_seq":25,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.213819+0000","last_change":"2026-03-10T14:51:56.937065+0000","last_active":"2026-03-10T14:52:24.213819+0000","last_peered":"2026-03-10T14:52:24.213819+0000","last_clean":"2026-03-10T14:52:24.213819+0000","last_became_active":"2026-03-10T14:51:56.936957+0000","last_became_peered":"2026-03-10T14:51:56.936957+0000","last_unstale":"2026-03-10T14:52:24.213819+0000","last_undegraded":"2026-03-10T14:52:24.213819+0000","last_fullsized":"2026-03-10T14:52:24.213819+0000","mapping_epoch":61,"log_start":"0'0","ondisk_log_start":"0'0","created":61,"last_epoch_clean":62,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:55.9026
10+0000","last_clean_scrub_stamp":"2026-03-10T14:51:55.902610+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T15:31:37.739743+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,4],"acting":[0,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"6.6","version":"65'1","reported_seq":22,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.337054+0000","last_change":"2026-03-10T14:51:58.946891+0000","last_active":"2026-03-10T14:52:24.337054+0000","last_peered":"2026-03-10T14:52:24.337054+0000","last_clean":"2026-03-10T14:52:24.337054+0000","last_became_active":"2026-03-10T14:51:58.946674+0000","last_became_peered":"2026-03-10T14:51:58.946674+0000","last
_unstale":"2026-03-10T14:52:24.337054+0000","last_undegraded":"2026-03-10T14:52:24.337054+0000","last_fullsized":"2026-03-10T14:52:24.337054+0000","mapping_epoch":63,"log_start":"0'0","ondisk_log_start":"0'0","created":63,"last_epoch_clean":64,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_clean_scrub_stamp":"2026-03-10T14:51:57.913312+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T22:58:56.849208+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":13,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":1,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,4,7],"acting":[3,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.0","ver
sion":"65'18","reported_seq":61,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.212245+0000","last_change":"2026-03-10T14:51:54.924261+0000","last_active":"2026-03-10T14:52:24.212245+0000","last_peered":"2026-03-10T14:52:24.212245+0000","last_clean":"2026-03-10T14:52:24.212245+0000","last_became_active":"2026-03-10T14:51:54.924158+0000","last_became_peered":"2026-03-10T14:51:54.924158+0000","last_unstale":"2026-03-10T14:52:24.212245+0000","last_undegraded":"2026-03-10T14:52:24.212245+0000","last_fullsized":"2026-03-10T14:52:24.212245+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_clean_scrub_stamp":"2026-03-10T14:51:53.708867+0000","objects_scrubbed":0,"log_size":18,"log_dups_size":0,"ondisk_log_size":18,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-12T01:14:52.917121+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":220,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":34,"num_read_kb":22,"num_write":20,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2,6],"acting":[1,2,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"2.1","version":"0'0","reported_seq":33,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.193999+0000","last_change":"2026-03-10T14:51:52.734331+0000","last_active":"2026-03-10T14:52:25.193999+0000","last_peered":"2026-03-10T14:52:25.193999+0000","last_clean":"2026-03-10T14:52:25.193999+0000","last_became_active":"2026-03-10T14:51:52.733969+0000","last_became_peered":"2026-03-10T14:51:52.733969+0000","last_unstale":"2026-03-10T14:52:25.193999+0000","last_undegraded":"2026-03-10T14:52:25.193999+0000","last_fullsized":"2026-03-10T14:52:25.193999+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:5
1.696445+0000","last_clean_scrub_stamp":"2026-03-10T14:51:51.696445+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T22:53:58.982626+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,3,0],"acting":[2,3,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"5.6","version":"0'0","reported_seq":25,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.194216+0000","last_change":"2026-03-10T14:51:56.941101+0000","last_active":"2026-03-10T14:52:25.194216+0000","last_peered":"2026-03-10T14:52:25.194216+0000","last_clean":"2026-03-10T14:52:25.194216+0000","last_became_active":"2026-03-10T14:51:56.940938+0000","last_became_peered":"2026-03-10T14:51:56.940938+0000",
"last_unstale":"2026-03-10T14:52:25.194216+0000","last_undegraded":"2026-03-10T14:52:25.194216+0000","last_fullsized":"2026-03-10T14:52:25.194216+0000","mapping_epoch":61,"log_start":"0'0","ondisk_log_start":"0'0","created":61,"last_epoch_clean":62,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_clean_scrub_stamp":"2026-03-10T14:51:55.902610+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T17:05:35.985365+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,5,7],"acting":[2,5,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.5",
"version":"0'0","reported_seq":21,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.264981+0000","last_change":"2026-03-10T14:51:59.508442+0000","last_active":"2026-03-10T14:52:25.264981+0000","last_peered":"2026-03-10T14:52:25.264981+0000","last_clean":"2026-03-10T14:52:25.264981+0000","last_became_active":"2026-03-10T14:51:59.508307+0000","last_became_peered":"2026-03-10T14:51:59.508307+0000","last_unstale":"2026-03-10T14:52:25.264981+0000","last_undegraded":"2026-03-10T14:52:25.264981+0000","last_fullsized":"2026-03-10T14:52:25.264981+0000","mapping_epoch":63,"log_start":"0'0","ondisk_log_start":"0'0","created":63,"last_epoch_clean":64,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_clean_scrub_stamp":"2026-03-10T14:51:57.913312+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:30:19.067728+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,3],"acting":[7,6,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.1","version":"65'14","reported_seq":50,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.214510+0000","last_change":"2026-03-10T14:51:54.931494+0000","last_active":"2026-03-10T14:52:24.214510+0000","last_peered":"2026-03-10T14:52:24.214510+0000","last_clean":"2026-03-10T14:52:24.214510+0000","last_became_active":"2026-03-10T14:51:54.931402+0000","last_became_peered":"2026-03-10T14:51:54.931402+0000","last_unstale":"2026-03-10T14:52:24.214510+0000","last_undegraded":"2026-03-10T14:52:24.214510+0000","last_fullsized":"2026-03-10T14:52:24.214510+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:53.70
8867+0000","last_clean_scrub_stamp":"2026-03-10T14:51:53.708867+0000","objects_scrubbed":0,"log_size":14,"log_dups_size":0,"ondisk_log_size":14,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T19:58:36.522315+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":21,"num_read_kb":14,"num_write":14,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,4,3],"acting":[0,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"2.0","version":"0'0","reported_seq":33,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.263142+0000","last_change":"2026-03-10T14:51:52.751250+0000","last_active":"2026-03-10T14:52:25.263142+0000","last_peered":"2026-03-10T14:52:25.263142+0000","last_clean":"2026-03-10T14:52:25.263142+0000","last_became_active":"2026-03-10T14:51:52.750061+0000","last_became_peered":"2026-03-10T14:51:52.750061+0000
","last_unstale":"2026-03-10T14:52:25.263142+0000","last_undegraded":"2026-03-10T14:52:25.263142+0000","last_fullsized":"2026-03-10T14:52:25.263142+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_clean_scrub_stamp":"2026-03-10T14:51:51.696445+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-12T02:29:24.434390+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,1,0],"acting":[7,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.7
","version":"0'0","reported_seq":25,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.211401+0000","last_change":"2026-03-10T14:51:56.941452+0000","last_active":"2026-03-10T14:52:24.211401+0000","last_peered":"2026-03-10T14:52:24.211401+0000","last_clean":"2026-03-10T14:52:24.211401+0000","last_became_active":"2026-03-10T14:51:56.939000+0000","last_became_peered":"2026-03-10T14:51:56.939000+0000","last_unstale":"2026-03-10T14:52:24.211401+0000","last_undegraded":"2026-03-10T14:52:24.211401+0000","last_fullsized":"2026-03-10T14:52:24.211401+0000","mapping_epoch":61,"log_start":"0'0","ondisk_log_start":"0'0","created":61,"last_epoch_clean":62,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_clean_scrub_stamp":"2026-03-10T14:51:55.902610+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:06:07.919881+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,0],"acting":[5,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.4","version":"0'0","reported_seq":21,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.212043+0000","last_change":"2026-03-10T14:51:58.952090+0000","last_active":"2026-03-10T14:52:24.212043+0000","last_peered":"2026-03-10T14:52:24.212043+0000","last_clean":"2026-03-10T14:52:24.212043+0000","last_became_active":"2026-03-10T14:51:58.951968+0000","last_became_peered":"2026-03-10T14:51:58.951968+0000","last_unstale":"2026-03-10T14:52:24.212043+0000","last_undegraded":"2026-03-10T14:52:24.212043+0000","last_fullsized":"2026-03-10T14:52:24.212043+0000","mapping_epoch":63,"log_start":"0'0","ondisk_log_start":"0'0","created":63,"last_epoch_clean":64,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:57.9133
12+0000","last_clean_scrub_stamp":"2026-03-10T14:51:57.913312+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T18:33:36.358829+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,3],"acting":[1,5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.2","version":"65'10","reported_seq":44,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.337801+0000","last_change":"2026-03-10T14:51:54.936148+0000","last_active":"2026-03-10T14:52:24.337801+0000","last_peered":"2026-03-10T14:52:24.337801+0000","last_clean":"2026-03-10T14:52:24.337801+0000","last_became_active":"2026-03-10T14:51:54.935998+0000","last_became_peered":"2026-03-10T14:51:54.935998+0000","las
t_unstale":"2026-03-10T14:52:24.337801+0000","last_undegraded":"2026-03-10T14:52:24.337801+0000","last_fullsized":"2026-03-10T14:52:24.337801+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_clean_scrub_stamp":"2026-03-10T14:51:53.708867+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T21:52:40.667935+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,5,6],"acting":[3,5,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.3
","version":"0'0","reported_seq":33,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.210651+0000","last_change":"2026-03-10T14:51:52.743019+0000","last_active":"2026-03-10T14:52:24.210651+0000","last_peered":"2026-03-10T14:52:24.210651+0000","last_clean":"2026-03-10T14:52:24.210651+0000","last_became_active":"2026-03-10T14:51:52.742637+0000","last_became_peered":"2026-03-10T14:51:52.742637+0000","last_unstale":"2026-03-10T14:52:24.210651+0000","last_undegraded":"2026-03-10T14:52:24.210651+0000","last_fullsized":"2026-03-10T14:52:24.210651+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_clean_scrub_stamp":"2026-03-10T14:51:51.696445+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T23:32:49.414867+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,2,7],"acting":[5,2,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"1.0","version":"69'39","reported_seq":68,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:26.497710+0000","last_change":"2026-03-10T14:51:33.247484+0000","last_active":"2026-03-10T14:52:26.497710+0000","last_peered":"2026-03-10T14:52:26.497710+0000","last_clean":"2026-03-10T14:52:26.497710+0000","last_became_active":"2026-03-10T14:51:33.241993+0000","last_became_peered":"2026-03-10T14:51:33.241993+0000","last_unstale":"2026-03-10T14:52:26.497710+0000","last_undegraded":"2026-03-10T14:52:26.497710+0000","last_fullsized":"2026-03-10T14:52:26.497710+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":20,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:48:39.387534+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:48:39.38
7534+0000","last_clean_scrub_stamp":"2026-03-10T14:48:39.387534+0000","objects_scrubbed":0,"log_size":39,"log_dups_size":0,"ondisk_log_size":39,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T15:31:24.045522+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":106,"num_read_kb":213,"num_write":69,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0,6],"acting":[7,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.4","version":"0'0","reported_seq":25,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.264286+0000","last_change":"2026-03-10T14:51:56.951571+0000","last_active":"2026-03-10T14:52:25.264286+0000","last_peered":"2026-03-10T14:52:25.264286+0000","last_clean":"2026-03-10T14:52:25.264286+0000","last_became_active":"2026-03-10T14:51:56.951407+0000","last_became_peered":"2026-03-10T14:51:5
6.951407+0000","last_unstale":"2026-03-10T14:52:25.264286+0000","last_undegraded":"2026-03-10T14:52:25.264286+0000","last_fullsized":"2026-03-10T14:52:25.264286+0000","mapping_epoch":61,"log_start":"0'0","ondisk_log_start":"0'0","created":61,"last_epoch_clean":62,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_clean_scrub_stamp":"2026-03-10T14:51:55.902610+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-12T00:58:57.435129+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,2,5],"acting":[7,2,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]}
,{"pgid":"6.7","version":"0'0","reported_seq":21,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.210597+0000","last_change":"2026-03-10T14:51:58.952595+0000","last_active":"2026-03-10T14:52:24.210597+0000","last_peered":"2026-03-10T14:52:24.210597+0000","last_clean":"2026-03-10T14:52:24.210597+0000","last_became_active":"2026-03-10T14:51:58.952392+0000","last_became_peered":"2026-03-10T14:51:58.952392+0000","last_unstale":"2026-03-10T14:52:24.210597+0000","last_undegraded":"2026-03-10T14:52:24.210597+0000","last_fullsized":"2026-03-10T14:52:24.210597+0000","mapping_epoch":63,"log_start":"0'0","ondisk_log_start":"0'0","created":63,"last_epoch_clean":64,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_clean_scrub_stamp":"2026-03-10T14:51:57.913312+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:11:06.409397+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,4],"acting":[5,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"3.d","version":"65'17","reported_seq":57,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.263587+0000","last_change":"2026-03-10T14:51:54.931004+0000","last_active":"2026-03-10T14:52:25.263587+0000","last_peered":"2026-03-10T14:52:25.263587+0000","last_clean":"2026-03-10T14:52:25.263587+0000","last_became_active":"2026-03-10T14:51:54.930756+0000","last_became_peered":"2026-03-10T14:51:54.930756+0000","last_unstale":"2026-03-10T14:52:25.263587+0000","last_undegraded":"2026-03-10T14:52:25.263587+0000","last_fullsized":"2026-03-10T14:52:25.263587+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:53.70
8867+0000","last_clean_scrub_stamp":"2026-03-10T14:51:53.708867+0000","objects_scrubbed":0,"log_size":17,"log_dups_size":0,"ondisk_log_size":17,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T18:47:37.656055+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":29,"num_read_kb":19,"num_write":18,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,5,6],"acting":[7,5,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.c","version":"0'0","reported_seq":33,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.193880+0000","last_change":"2026-03-10T14:51:52.732626+0000","last_active":"2026-03-10T14:52:25.193880+0000","last_peered":"2026-03-10T14:52:25.193880+0000","last_clean":"2026-03-10T14:52:25.193880+0000","last_became_active":"2026-03-10T14:51:52.732457+0000","last_became_peered":"2026-03-10T14:51:52.732457+00
00","last_unstale":"2026-03-10T14:52:25.193880+0000","last_undegraded":"2026-03-10T14:52:25.193880+0000","last_fullsized":"2026-03-10T14:52:25.193880+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_clean_scrub_stamp":"2026-03-10T14:51:51.696445+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-12T02:24:10.879181+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,5,0],"acting":[2,5,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"5
.b","version":"0'0","reported_seq":25,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.194365+0000","last_change":"2026-03-10T14:51:56.922668+0000","last_active":"2026-03-10T14:52:25.194365+0000","last_peered":"2026-03-10T14:52:25.194365+0000","last_clean":"2026-03-10T14:52:25.194365+0000","last_became_active":"2026-03-10T14:51:56.922427+0000","last_became_peered":"2026-03-10T14:51:56.922427+0000","last_unstale":"2026-03-10T14:52:25.194365+0000","last_undegraded":"2026-03-10T14:52:25.194365+0000","last_fullsized":"2026-03-10T14:52:25.194365+0000","mapping_epoch":61,"log_start":"0'0","ondisk_log_start":"0'0","created":61,"last_epoch_clean":62,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_clean_scrub_stamp":"2026-03-10T14:51:55.902610+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T23:50:08.480376+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,0,5],"acting":[2,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.8","version":"0'0","reported_seq":21,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.263628+0000","last_change":"2026-03-10T14:51:58.942958+0000","last_active":"2026-03-10T14:52:25.263628+0000","last_peered":"2026-03-10T14:52:25.263628+0000","last_clean":"2026-03-10T14:52:25.263628+0000","last_became_active":"2026-03-10T14:51:58.942873+0000","last_became_peered":"2026-03-10T14:51:58.942873+0000","last_unstale":"2026-03-10T14:52:25.263628+0000","last_undegraded":"2026-03-10T14:52:25.263628+0000","last_fullsized":"2026-03-10T14:52:25.263628+0000","mapping_epoch":63,"log_start":"0'0","ondisk_log_start":"0'0","created":63,"last_epoch_clean":64,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:57.9133
12+0000","last_clean_scrub_stamp":"2026-03-10T14:51:57.913312+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T21:43:34.921933+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,2,3],"acting":[7,2,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.c","version":"65'10","reported_seq":44,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.211277+0000","last_change":"2026-03-10T14:51:54.930155+0000","last_active":"2026-03-10T14:52:24.211277+0000","last_peered":"2026-03-10T14:52:24.211277+0000","last_clean":"2026-03-10T14:52:24.211277+0000","last_became_active":"2026-03-10T14:51:54.925390+0000","last_became_peered":"2026-03-10T14:51:54.925390+0000","las
t_unstale":"2026-03-10T14:52:24.211277+0000","last_undegraded":"2026-03-10T14:52:24.211277+0000","last_fullsized":"2026-03-10T14:52:24.211277+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_clean_scrub_stamp":"2026-03-10T14:51:53.708867+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T18:31:57.174372+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,6],"acting":[5,3,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"2.d
","version":"0'0","reported_seq":33,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.212200+0000","last_change":"2026-03-10T14:51:52.727337+0000","last_active":"2026-03-10T14:52:24.212200+0000","last_peered":"2026-03-10T14:52:24.212200+0000","last_clean":"2026-03-10T14:52:24.212200+0000","last_became_active":"2026-03-10T14:51:52.726653+0000","last_became_peered":"2026-03-10T14:51:52.726653+0000","last_unstale":"2026-03-10T14:52:24.212200+0000","last_undegraded":"2026-03-10T14:52:24.212200+0000","last_fullsized":"2026-03-10T14:52:24.212200+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_clean_scrub_stamp":"2026-03-10T14:51:51.696445+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T22:18:28.992971+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,4,3],"acting":[1,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"5.a","version":"0'0","reported_seq":25,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.194616+0000","last_change":"2026-03-10T14:51:56.935038+0000","last_active":"2026-03-10T14:52:25.194616+0000","last_peered":"2026-03-10T14:52:25.194616+0000","last_clean":"2026-03-10T14:52:25.194616+0000","last_became_active":"2026-03-10T14:51:56.934934+0000","last_became_peered":"2026-03-10T14:51:56.934934+0000","last_unstale":"2026-03-10T14:52:25.194616+0000","last_undegraded":"2026-03-10T14:52:25.194616+0000","last_fullsized":"2026-03-10T14:52:25.194616+0000","mapping_epoch":61,"log_start":"0'0","ondisk_log_start":"0'0","created":61,"last_epoch_clean":62,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:55.9026
10+0000","last_clean_scrub_stamp":"2026-03-10T14:51:55.902610+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-12T00:36:17.238564+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,4,3],"acting":[2,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.9","version":"0'0","reported_seq":21,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.213556+0000","last_change":"2026-03-10T14:51:58.945305+0000","last_active":"2026-03-10T14:52:24.213556+0000","last_peered":"2026-03-10T14:52:24.213556+0000","last_clean":"2026-03-10T14:52:24.213556+0000","last_became_active":"2026-03-10T14:51:58.945190+0000","last_became_peered":"2026-03-10T14:51:58.945190+0000","last_
unstale":"2026-03-10T14:52:24.213556+0000","last_undegraded":"2026-03-10T14:52:24.213556+0000","last_fullsized":"2026-03-10T14:52:24.213556+0000","mapping_epoch":63,"log_start":"0'0","ondisk_log_start":"0'0","created":63,"last_epoch_clean":64,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_clean_scrub_stamp":"2026-03-10T14:51:57.913312+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T17:58:43.875107+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,2],"acting":[0,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.f","versi
on":"65'15","reported_seq":54,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.263789+0000","last_change":"2026-03-10T14:51:54.927635+0000","last_active":"2026-03-10T14:52:25.263789+0000","last_peered":"2026-03-10T14:52:25.263789+0000","last_clean":"2026-03-10T14:52:25.263789+0000","last_became_active":"2026-03-10T14:51:54.927541+0000","last_became_peered":"2026-03-10T14:51:54.927541+0000","last_unstale":"2026-03-10T14:52:25.263789+0000","last_undegraded":"2026-03-10T14:52:25.263789+0000","last_fullsized":"2026-03-10T14:52:25.263789+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_clean_scrub_stamp":"2026-03-10T14:51:53.708867+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:49:49.822249+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,0],"acting":[7,4,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.e","version":"0'0","reported_seq":33,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.194036+0000","last_change":"2026-03-10T14:51:52.744208+0000","last_active":"2026-03-10T14:52:25.194036+0000","last_peered":"2026-03-10T14:52:25.194036+0000","last_clean":"2026-03-10T14:52:25.194036+0000","last_became_active":"2026-03-10T14:51:52.744054+0000","last_became_peered":"2026-03-10T14:51:52.744054+0000","last_unstale":"2026-03-10T14:52:25.194036+0000","last_undegraded":"2026-03-10T14:52:25.194036+0000","last_fullsized":"2026-03-10T14:52:25.194036+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:5
1.696445+0000","last_clean_scrub_stamp":"2026-03-10T14:51:51.696445+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T17:09:38.862290+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,3,7],"acting":[2,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"5.9","version":"65'11","reported_seq":54,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:53:02.791713+0000","last_change":"2026-03-10T14:51:56.948808+0000","last_active":"2026-03-10T14:53:02.791713+0000","last_peered":"2026-03-10T14:53:02.791713+0000","last_clean":"2026-03-10T14:53:02.791713+0000","last_became_active":"2026-03-10T14:51:56.948627+0000","last_became_peered":"2026-03-10T14:51:56.948627+0000
","last_unstale":"2026-03-10T14:53:02.791713+0000","last_undegraded":"2026-03-10T14:53:02.791713+0000","last_fullsized":"2026-03-10T14:53:02.791713+0000","mapping_epoch":61,"log_start":"0'0","ondisk_log_start":"0'0","created":61,"last_epoch_clean":62,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_clean_scrub_stamp":"2026-03-10T14:51:55.902610+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T21:54:05.780560+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,4],"acting":[7,6,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6
.a","version":"0'0","reported_seq":21,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.210172+0000","last_change":"2026-03-10T14:51:59.510444+0000","last_active":"2026-03-10T14:52:24.210172+0000","last_peered":"2026-03-10T14:52:24.210172+0000","last_clean":"2026-03-10T14:52:24.210172+0000","last_became_active":"2026-03-10T14:51:59.510288+0000","last_became_peered":"2026-03-10T14:51:59.510288+0000","last_unstale":"2026-03-10T14:52:24.210172+0000","last_undegraded":"2026-03-10T14:52:24.210172+0000","last_fullsized":"2026-03-10T14:52:24.210172+0000","mapping_epoch":63,"log_start":"0'0","ondisk_log_start":"0'0","created":63,"last_epoch_clean":64,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_clean_scrub_stamp":"2026-03-10T14:51:57.913312+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-12T02:28:56.469567+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,6,0],"acting":[5,6,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"3.e","version":"65'11","reported_seq":48,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.264457+0000","last_change":"2026-03-10T14:51:54.938929+0000","last_active":"2026-03-10T14:52:25.264457+0000","last_peered":"2026-03-10T14:52:25.264457+0000","last_clean":"2026-03-10T14:52:25.264457+0000","last_became_active":"2026-03-10T14:51:54.927755+0000","last_became_peered":"2026-03-10T14:51:54.927755+0000","last_unstale":"2026-03-10T14:52:25.264457+0000","last_undegraded":"2026-03-10T14:52:25.264457+0000","last_fullsized":"2026-03-10T14:52:25.264457+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:53.70
8867+0000","last_clean_scrub_stamp":"2026-03-10T14:51:53.708867+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T23:19:34.531530+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,1],"acting":[7,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.f","version":"58'2","reported_seq":49,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.329494+0000","last_change":"2026-03-10T14:51:52.746829+0000","last_active":"2026-03-10T14:52:24.329494+0000","last_peered":"2026-03-10T14:52:24.329494+0000","last_clean":"2026-03-10T14:52:24.329494+0000","last_became_active":"2026-03-10T14:51:52.746683+0000","last_became_peered":"2026-03-10T14:51:52.746683+0
000","last_unstale":"2026-03-10T14:52:24.329494+0000","last_undegraded":"2026-03-10T14:52:24.329494+0000","last_fullsized":"2026-03-10T14:52:24.329494+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_clean_scrub_stamp":"2026-03-10T14:51:51.696445+0000","objects_scrubbed":0,"log_size":2,"log_dups_size":0,"ondisk_log_size":2,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T20:44:22.908143+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":92,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":14,"num_read_kb":14,"num_write":4,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,0,7],"acting":[4,0,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid
":"5.8","version":"0'0","reported_seq":25,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.194611+0000","last_change":"2026-03-10T14:51:56.927644+0000","last_active":"2026-03-10T14:52:25.194611+0000","last_peered":"2026-03-10T14:52:25.194611+0000","last_clean":"2026-03-10T14:52:25.194611+0000","last_became_active":"2026-03-10T14:51:56.927552+0000","last_became_peered":"2026-03-10T14:51:56.927552+0000","last_unstale":"2026-03-10T14:52:25.194611+0000","last_undegraded":"2026-03-10T14:52:25.194611+0000","last_fullsized":"2026-03-10T14:52:25.194611+0000","mapping_epoch":61,"log_start":"0'0","ondisk_log_start":"0'0","created":61,"last_epoch_clean":62,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_clean_scrub_stamp":"2026-03-10T14:51:55.902610+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T21:11:10.187287+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,0,1],"acting":[2,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.b","version":"0'0","reported_seq":21,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.337013+0000","last_change":"2026-03-10T14:51:58.946829+0000","last_active":"2026-03-10T14:52:24.337013+0000","last_peered":"2026-03-10T14:52:24.337013+0000","last_clean":"2026-03-10T14:52:24.337013+0000","last_became_active":"2026-03-10T14:51:58.946526+0000","last_became_peered":"2026-03-10T14:51:58.946526+0000","last_unstale":"2026-03-10T14:52:24.337013+0000","last_undegraded":"2026-03-10T14:52:24.337013+0000","last_fullsized":"2026-03-10T14:52:24.337013+0000","mapping_epoch":63,"log_start":"0'0","ondisk_log_start":"0'0","created":63,"last_epoch_clean":64,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:57.9133
12+0000","last_clean_scrub_stamp":"2026-03-10T14:51:57.913312+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T15:53:06.581325+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,1],"acting":[3,7,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.11","version":"65'11","reported_seq":48,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.263534+0000","last_change":"2026-03-10T14:51:54.928240+0000","last_active":"2026-03-10T14:52:25.263534+0000","last_peered":"2026-03-10T14:52:25.263534+0000","last_clean":"2026-03-10T14:52:25.263534+0000","last_became_active":"2026-03-10T14:51:54.928008+0000","last_became_peered":"2026-03-10T14:51:54.928008+0000","la
st_unstale":"2026-03-10T14:52:25.263534+0000","last_undegraded":"2026-03-10T14:52:25.263534+0000","last_fullsized":"2026-03-10T14:52:25.263534+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_clean_scrub_stamp":"2026-03-10T14:51:53.708867+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T15:42:43.013471+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,6],"acting":[7,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"
2.10","version":"0'0","reported_seq":33,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.193955+0000","last_change":"2026-03-10T14:51:52.733695+0000","last_active":"2026-03-10T14:52:25.193955+0000","last_peered":"2026-03-10T14:52:25.193955+0000","last_clean":"2026-03-10T14:52:25.193955+0000","last_became_active":"2026-03-10T14:51:52.732756+0000","last_became_peered":"2026-03-10T14:51:52.732756+0000","last_unstale":"2026-03-10T14:52:25.193955+0000","last_undegraded":"2026-03-10T14:52:25.193955+0000","last_fullsized":"2026-03-10T14:52:25.193955+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_clean_scrub_stamp":"2026-03-10T14:51:51.696445+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:20:33.261227+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,1,0],"acting":[2,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"5.17","version":"0'0","reported_seq":25,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.337143+0000","last_change":"2026-03-10T14:51:56.940374+0000","last_active":"2026-03-10T14:52:24.337143+0000","last_peered":"2026-03-10T14:52:24.337143+0000","last_clean":"2026-03-10T14:52:24.337143+0000","last_became_active":"2026-03-10T14:51:56.939967+0000","last_became_peered":"2026-03-10T14:51:56.939967+0000","last_unstale":"2026-03-10T14:52:24.337143+0000","last_undegraded":"2026-03-10T14:52:24.337143+0000","last_fullsized":"2026-03-10T14:52:24.337143+0000","mapping_epoch":61,"log_start":"0'0","ondisk_log_start":"0'0","created":61,"last_epoch_clean":62,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:55.902
610+0000","last_clean_scrub_stamp":"2026-03-10T14:51:55.902610+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T17:00:18.799940+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,7],"acting":[3,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.14","version":"0'0","reported_seq":21,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.194272+0000","last_change":"2026-03-10T14:51:58.948933+0000","last_active":"2026-03-10T14:52:25.194272+0000","last_peered":"2026-03-10T14:52:25.194272+0000","last_clean":"2026-03-10T14:52:25.194272+0000","last_became_active":"2026-03-10T14:51:58.948783+0000","last_became_peered":"2026-03-10T14:51:58.948783+0000","las
t_unstale":"2026-03-10T14:52:25.194272+0000","last_undegraded":"2026-03-10T14:52:25.194272+0000","last_fullsized":"2026-03-10T14:52:25.194272+0000","mapping_epoch":63,"log_start":"0'0","ondisk_log_start":"0'0","created":63,"last_epoch_clean":64,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_clean_scrub_stamp":"2026-03-10T14:51:57.913312+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-12T02:27:17.812468+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,4,7],"acting":[2,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"3.10","ve
rsion":"65'4","reported_seq":35,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.243256+0000","last_change":"2026-03-10T14:51:54.932295+0000","last_active":"2026-03-10T14:52:25.243256+0000","last_peered":"2026-03-10T14:52:25.243256+0000","last_clean":"2026-03-10T14:52:25.243256+0000","last_became_active":"2026-03-10T14:51:54.932181+0000","last_became_peered":"2026-03-10T14:51:54.932181+0000","last_unstale":"2026-03-10T14:52:25.243256+0000","last_undegraded":"2026-03-10T14:52:25.243256+0000","last_fullsized":"2026-03-10T14:52:25.243256+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_clean_scrub_stamp":"2026-03-10T14:51:53.708867+0000","objects_scrubbed":0,"log_size":4,"log_dups_size":0,"ondisk_log_size":4,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T22:28:40.352238+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":6,"num_read_kb":4,"num_write":4,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,0,5],"acting":[6,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"2.11","version":"0'0","reported_seq":33,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.242355+0000","last_change":"2026-03-10T14:51:52.728606+0000","last_active":"2026-03-10T14:52:25.242355+0000","last_peered":"2026-03-10T14:52:25.242355+0000","last_clean":"2026-03-10T14:52:25.242355+0000","last_became_active":"2026-03-10T14:51:52.728420+0000","last_became_peered":"2026-03-10T14:51:52.728420+0000","last_unstale":"2026-03-10T14:52:25.242355+0000","last_undegraded":"2026-03-10T14:52:25.242355+0000","last_fullsized":"2026-03-10T14:52:25.242355+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:51.696
445+0000","last_clean_scrub_stamp":"2026-03-10T14:51:51.696445+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T20:01:50.018046+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,1],"acting":[6,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"5.16","version":"0'0","reported_seq":25,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.211507+0000","last_change":"2026-03-10T14:51:56.939190+0000","last_active":"2026-03-10T14:52:24.211507+0000","last_peered":"2026-03-10T14:52:24.211507+0000","last_clean":"2026-03-10T14:52:24.211507+0000","last_became_active":"2026-03-10T14:51:56.938188+0000","last_became_peered":"2026-03-10T14:51:56.938188+0000","las
t_unstale":"2026-03-10T14:52:24.211507+0000","last_undegraded":"2026-03-10T14:52:24.211507+0000","last_fullsized":"2026-03-10T14:52:24.211507+0000","mapping_epoch":61,"log_start":"0'0","ondisk_log_start":"0'0","created":61,"last_epoch_clean":62,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_clean_scrub_stamp":"2026-03-10T14:51:55.902610+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T23:13:33.020929+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,1],"acting":[5,3,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.15","ve
rsion":"0'0","reported_seq":21,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.265070+0000","last_change":"2026-03-10T14:51:59.508610+0000","last_active":"2026-03-10T14:52:25.265070+0000","last_peered":"2026-03-10T14:52:25.265070+0000","last_clean":"2026-03-10T14:52:25.265070+0000","last_became_active":"2026-03-10T14:51:59.508350+0000","last_became_peered":"2026-03-10T14:51:59.508350+0000","last_unstale":"2026-03-10T14:52:25.265070+0000","last_undegraded":"2026-03-10T14:52:25.265070+0000","last_fullsized":"2026-03-10T14:52:25.265070+0000","mapping_epoch":63,"log_start":"0'0","ondisk_log_start":"0'0","created":63,"last_epoch_clean":64,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_clean_scrub_stamp":"2026-03-10T14:51:57.913312+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T22:17:04.062333+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,4],"acting":[7,6,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.13","version":"65'11","reported_seq":48,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.263890+0000","last_change":"2026-03-10T14:51:54.927522+0000","last_active":"2026-03-10T14:52:25.263890+0000","last_peered":"2026-03-10T14:52:25.263890+0000","last_clean":"2026-03-10T14:52:25.263890+0000","last_became_active":"2026-03-10T14:51:54.927436+0000","last_became_peered":"2026-03-10T14:51:54.927436+0000","last_unstale":"2026-03-10T14:52:25.263890+0000","last_undegraded":"2026-03-10T14:52:25.263890+0000","last_fullsized":"2026-03-10T14:52:25.263890+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:53.7
08867+0000","last_clean_scrub_stamp":"2026-03-10T14:51:53.708867+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T16:07:05.164262+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,2],"acting":[7,4,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.12","version":"0'0","reported_seq":33,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.211030+0000","last_change":"2026-03-10T14:51:52.742604+0000","last_active":"2026-03-10T14:52:24.211030+0000","last_peered":"2026-03-10T14:52:24.211030+0000","last_clean":"2026-03-10T14:52:24.211030+0000","last_became_active":"2026-03-10T14:51:52.742506+0000","last_became_peered":"2026-03-10T14:51:52.742506+
0000","last_unstale":"2026-03-10T14:52:24.211030+0000","last_undegraded":"2026-03-10T14:52:24.211030+0000","last_fullsized":"2026-03-10T14:52:24.211030+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_clean_scrub_stamp":"2026-03-10T14:51:51.696445+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-12T02:01:08.458113+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,7],"acting":[5,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":
"5.15","version":"65'11","reported_seq":51,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:53:02.791203+0000","last_change":"2026-03-10T14:51:56.939508+0000","last_active":"2026-03-10T14:53:02.791203+0000","last_peered":"2026-03-10T14:53:02.791203+0000","last_clean":"2026-03-10T14:53:02.791203+0000","last_became_active":"2026-03-10T14:51:56.939063+0000","last_became_peered":"2026-03-10T14:51:56.939063+0000","last_unstale":"2026-03-10T14:53:02.791203+0000","last_undegraded":"2026-03-10T14:53:02.791203+0000","last_fullsized":"2026-03-10T14:53:02.791203+0000","mapping_epoch":61,"log_start":"0'0","ondisk_log_start":"0'0","created":61,"last_epoch_clean":62,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_clean_scrub_stamp":"2026-03-10T14:51:55.902610+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T19:25:20.989255+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,0],"acting":[5,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.16","version":"0'0","reported_seq":21,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.213743+0000","last_change":"2026-03-10T14:51:58.945353+0000","last_active":"2026-03-10T14:52:24.213743+0000","last_peered":"2026-03-10T14:52:24.213743+0000","last_clean":"2026-03-10T14:52:24.213743+0000","last_became_active":"2026-03-10T14:51:58.945214+0000","last_became_peered":"2026-03-10T14:51:58.945214+0000","last_unstale":"2026-03-10T14:52:24.213743+0000","last_undegraded":"2026-03-10T14:52:24.213743+0000","last_fullsized":"2026-03-10T14:52:24.213743+0000","mapping_epoch":63,"log_start":"0'0","ondisk_log_start":"0'0","created":63,"last_epoch_clean":64,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:57.913
312+0000","last_clean_scrub_stamp":"2026-03-10T14:51:57.913312+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T19:55:30.387498+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,3],"acting":[0,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.12","version":"65'9","reported_seq":45,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.213939+0000","last_change":"2026-03-10T14:51:54.922190+0000","last_active":"2026-03-10T14:52:24.213939+0000","last_peered":"2026-03-10T14:52:24.213939+0000","last_clean":"2026-03-10T14:52:24.213939+0000","last_became_active":"2026-03-10T14:51:54.918400+0000","last_became_peered":"2026-03-10T14:51:54.918400+0000","la
st_unstale":"2026-03-10T14:52:24.213939+0000","last_undegraded":"2026-03-10T14:52:24.213939+0000","last_fullsized":"2026-03-10T14:52:24.213939+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_clean_scrub_stamp":"2026-03-10T14:51:53.708867+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T16:43:28.642723+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,3],"acting":[0,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"2.
13","version":"0'0","reported_seq":33,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.213899+0000","last_change":"2026-03-10T14:51:52.724303+0000","last_active":"2026-03-10T14:52:24.213899+0000","last_peered":"2026-03-10T14:52:24.213899+0000","last_clean":"2026-03-10T14:52:24.213899+0000","last_became_active":"2026-03-10T14:51:52.723665+0000","last_became_peered":"2026-03-10T14:51:52.723665+0000","last_unstale":"2026-03-10T14:52:24.213899+0000","last_undegraded":"2026-03-10T14:52:24.213899+0000","last_fullsized":"2026-03-10T14:52:24.213899+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_clean_scrub_stamp":"2026-03-10T14:51:51.696445+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T17:47:03.837440+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,4,3],"acting":[0,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"5.14","version":"65'11","reported_seq":51,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:53:02.793746+0000","last_change":"2026-03-10T14:51:56.942666+0000","last_active":"2026-03-10T14:53:02.793746+0000","last_peered":"2026-03-10T14:53:02.793746+0000","last_clean":"2026-03-10T14:53:02.793746+0000","last_became_active":"2026-03-10T14:51:56.942577+0000","last_became_peered":"2026-03-10T14:51:56.942577+0000","last_unstale":"2026-03-10T14:53:02.793746+0000","last_undegraded":"2026-03-10T14:53:02.793746+0000","last_fullsized":"2026-03-10T14:53:02.793746+0000","mapping_epoch":61,"log_start":"0'0","ondisk_log_start":"0'0","created":61,"last_epoch_clean":62,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:55.9
02610+0000","last_clean_scrub_stamp":"2026-03-10T14:51:55.902610+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T16:56:48.261412+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,2],"acting":[3,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.17","version":"0'0","reported_seq":21,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.329265+0000","last_change":"2026-03-10T14:51:58.951959+0000","last_active":"2026-03-10T14:52:24.329265+0000","last_peered":"2026-03-10T14:52:24.329265+0000","last_clean":"2026-03-10T14:52:24.329265+0000","last_became_active":"2026-03-10T14:51:58.951793+0000","last_became_peered":"2026-03-10T14:51:58.951793+0000",
"last_unstale":"2026-03-10T14:52:24.329265+0000","last_undegraded":"2026-03-10T14:52:24.329265+0000","last_fullsized":"2026-03-10T14:52:24.329265+0000","mapping_epoch":63,"log_start":"0'0","ondisk_log_start":"0'0","created":63,"last_epoch_clean":64,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_clean_scrub_stamp":"2026-03-10T14:51:57.913312+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-12T00:57:06.637753+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,2,5],"acting":[4,2,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.15"
,"version":"65'9","reported_seq":45,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.263470+0000","last_change":"2026-03-10T14:51:54.928200+0000","last_active":"2026-03-10T14:52:25.263470+0000","last_peered":"2026-03-10T14:52:25.263470+0000","last_clean":"2026-03-10T14:52:25.263470+0000","last_became_active":"2026-03-10T14:51:54.927884+0000","last_became_peered":"2026-03-10T14:51:54.927884+0000","last_unstale":"2026-03-10T14:52:25.263470+0000","last_undegraded":"2026-03-10T14:52:25.263470+0000","last_fullsized":"2026-03-10T14:52:25.263470+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_clean_scrub_stamp":"2026-03-10T14:51:53.708867+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T23:28:45.465114+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,3,4],"acting":[7,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.14","version":"0'0","reported_seq":33,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.242425+0000","last_change":"2026-03-10T14:51:52.739040+0000","last_active":"2026-03-10T14:52:25.242425+0000","last_peered":"2026-03-10T14:52:25.242425+0000","last_clean":"2026-03-10T14:52:25.242425+0000","last_became_active":"2026-03-10T14:51:52.738943+0000","last_became_peered":"2026-03-10T14:51:52.738943+0000","last_unstale":"2026-03-10T14:52:25.242425+0000","last_undegraded":"2026-03-10T14:52:25.242425+0000","last_fullsized":"2026-03-10T14:52:25.242425+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:
51.696445+0000","last_clean_scrub_stamp":"2026-03-10T14:51:51.696445+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T22:51:48.451308+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,3,5],"acting":[6,3,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"5.13","version":"0'0","reported_seq":25,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.337151+0000","last_change":"2026-03-10T14:51:56.939741+0000","last_active":"2026-03-10T14:52:24.337151+0000","last_peered":"2026-03-10T14:52:24.337151+0000","last_clean":"2026-03-10T14:52:24.337151+0000","last_became_active":"2026-03-10T14:51:56.939616+0000","last_became_peered":"2026-03-10T14:51:56.939616+0000
","last_unstale":"2026-03-10T14:52:24.337151+0000","last_undegraded":"2026-03-10T14:52:24.337151+0000","last_fullsized":"2026-03-10T14:52:24.337151+0000","mapping_epoch":61,"log_start":"0'0","ondisk_log_start":"0'0","created":61,"last_epoch_clean":62,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_clean_scrub_stamp":"2026-03-10T14:51:55.902610+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-12T00:35:09.055952+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,1],"acting":[3,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.1
0","version":"0'0","reported_seq":21,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.213610+0000","last_change":"2026-03-10T14:51:58.954126+0000","last_active":"2026-03-10T14:52:24.213610+0000","last_peered":"2026-03-10T14:52:24.213610+0000","last_clean":"2026-03-10T14:52:24.213610+0000","last_became_active":"2026-03-10T14:51:58.954028+0000","last_became_peered":"2026-03-10T14:51:58.954028+0000","last_unstale":"2026-03-10T14:52:24.213610+0000","last_undegraded":"2026-03-10T14:52:24.213610+0000","last_fullsized":"2026-03-10T14:52:24.213610+0000","mapping_epoch":63,"log_start":"0'0","ondisk_log_start":"0'0","created":63,"last_epoch_clean":64,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_clean_scrub_stamp":"2026-03-10T14:51:57.913312+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T23:34:31.059158+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,1],"acting":[0,5,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.14","version":"65'10","reported_seq":44,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.328654+0000","last_change":"2026-03-10T14:51:54.926615+0000","last_active":"2026-03-10T14:52:24.328654+0000","last_peered":"2026-03-10T14:52:24.328654+0000","last_clean":"2026-03-10T14:52:24.328654+0000","last_became_active":"2026-03-10T14:51:54.926355+0000","last_became_peered":"2026-03-10T14:51:54.926355+0000","last_unstale":"2026-03-10T14:52:24.328654+0000","last_undegraded":"2026-03-10T14:52:24.328654+0000","last_fullsized":"2026-03-10T14:52:24.328654+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:53.7
08867+0000","last_clean_scrub_stamp":"2026-03-10T14:51:53.708867+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T19:22:51.944185+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,7,6],"acting":[4,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.15","version":"58'1","reported_seq":41,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.212013+0000","last_change":"2026-03-10T14:51:52.720660+0000","last_active":"2026-03-10T14:52:24.212013+0000","last_peered":"2026-03-10T14:52:24.212013+0000","last_clean":"2026-03-10T14:52:24.212013+0000","last_became_active":"2026-03-10T14:51:52.720554+0000","last_became_peered":"2026-03-10T14:51:52.720554+0
000","last_unstale":"2026-03-10T14:52:24.212013+0000","last_undegraded":"2026-03-10T14:52:24.212013+0000","last_fullsized":"2026-03-10T14:52:24.212013+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_clean_scrub_stamp":"2026-03-10T14:51:51.696445+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-12T02:00:19.554898+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":993,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":10,"num_read_kb":10,"num_write":2,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,0],"acting":[1,5,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgi
d":"5.12","version":"0'0","reported_seq":25,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.212024+0000","last_change":"2026-03-10T14:51:56.938614+0000","last_active":"2026-03-10T14:52:24.212024+0000","last_peered":"2026-03-10T14:52:24.212024+0000","last_clean":"2026-03-10T14:52:24.212024+0000","last_became_active":"2026-03-10T14:51:56.937756+0000","last_became_peered":"2026-03-10T14:51:56.937756+0000","last_unstale":"2026-03-10T14:52:24.212024+0000","last_undegraded":"2026-03-10T14:52:24.212024+0000","last_fullsized":"2026-03-10T14:52:24.212024+0000","mapping_epoch":61,"log_start":"0'0","ondisk_log_start":"0'0","created":61,"last_epoch_clean":62,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_clean_scrub_stamp":"2026-03-10T14:51:55.902610+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T23:11:14.188912+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,3],"acting":[1,5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.11","version":"0'0","reported_seq":21,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.337083+0000","last_change":"2026-03-10T14:51:58.955142+0000","last_active":"2026-03-10T14:52:24.337083+0000","last_peered":"2026-03-10T14:52:24.337083+0000","last_clean":"2026-03-10T14:52:24.337083+0000","last_became_active":"2026-03-10T14:51:58.955049+0000","last_became_peered":"2026-03-10T14:51:58.955049+0000","last_unstale":"2026-03-10T14:52:24.337083+0000","last_undegraded":"2026-03-10T14:52:24.337083+0000","last_fullsized":"2026-03-10T14:52:24.337083+0000","mapping_epoch":63,"log_start":"0'0","ondisk_log_start":"0'0","created":63,"last_epoch_clean":64,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:57.913
312+0000","last_clean_scrub_stamp":"2026-03-10T14:51:57.913312+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T16:07:43.290853+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,5],"acting":[3,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.17","version":"65'6","reported_seq":38,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.214558+0000","last_change":"2026-03-10T14:51:54.926625+0000","last_active":"2026-03-10T14:52:24.214558+0000","last_peered":"2026-03-10T14:52:24.214558+0000","last_clean":"2026-03-10T14:52:24.214558+0000","last_became_active":"2026-03-10T14:51:54.926495+0000","last_became_peered":"2026-03-10T14:51:54.926495+0000","la
st_unstale":"2026-03-10T14:52:24.214558+0000","last_undegraded":"2026-03-10T14:52:24.214558+0000","last_fullsized":"2026-03-10T14:52:24.214558+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_clean_scrub_stamp":"2026-03-10T14:51:53.708867+0000","objects_scrubbed":0,"log_size":6,"log_dups_size":0,"ondisk_log_size":6,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-12T02:42:10.784931+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":3,"num_object_clones":0,"num_object_copies":9,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":3,"num_whiteouts":0,"num_read":9,"num_read_kb":6,"num_write":6,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,3],"acting":[0,5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"2.16","v
ersion":"0'0","reported_seq":33,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.210974+0000","last_change":"2026-03-10T14:51:52.739244+0000","last_active":"2026-03-10T14:52:24.210974+0000","last_peered":"2026-03-10T14:52:24.210974+0000","last_clean":"2026-03-10T14:52:24.210974+0000","last_became_active":"2026-03-10T14:51:52.739149+0000","last_became_peered":"2026-03-10T14:51:52.739149+0000","last_unstale":"2026-03-10T14:52:24.210974+0000","last_undegraded":"2026-03-10T14:52:24.210974+0000","last_fullsized":"2026-03-10T14:52:24.210974+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_clean_scrub_stamp":"2026-03-10T14:51:51.696445+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-12T00:45:26.078705+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,6,2],"acting":[5,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.11","version":"0'0","reported_seq":25,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.243012+0000","last_change":"2026-03-10T14:51:56.946808+0000","last_active":"2026-03-10T14:52:25.243012+0000","last_peered":"2026-03-10T14:52:25.243012+0000","last_clean":"2026-03-10T14:52:25.243012+0000","last_became_active":"2026-03-10T14:51:56.946011+0000","last_became_peered":"2026-03-10T14:51:56.946011+0000","last_unstale":"2026-03-10T14:52:25.243012+0000","last_undegraded":"2026-03-10T14:52:25.243012+0000","last_fullsized":"2026-03-10T14:52:25.243012+0000","mapping_epoch":61,"log_start":"0'0","ondisk_log_start":"0'0","created":61,"last_epoch_clean":62,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:55.902
610+0000","last_clean_scrub_stamp":"2026-03-10T14:51:55.902610+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T17:21:06.813207+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,7],"acting":[6,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"6.12","version":"0'0","reported_seq":21,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.265024+0000","last_change":"2026-03-10T14:51:58.946120+0000","last_active":"2026-03-10T14:52:25.265024+0000","last_peered":"2026-03-10T14:52:25.265024+0000","last_clean":"2026-03-10T14:52:25.265024+0000","last_became_active":"2026-03-10T14:51:58.946041+0000","last_became_peered":"2026-03-10T14:51:58.946041+0000","las
t_unstale":"2026-03-10T14:52:25.265024+0000","last_undegraded":"2026-03-10T14:52:25.265024+0000","last_fullsized":"2026-03-10T14:52:25.265024+0000","mapping_epoch":63,"log_start":"0'0","ondisk_log_start":"0'0","created":63,"last_epoch_clean":64,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_clean_scrub_stamp":"2026-03-10T14:51:57.913312+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T21:27:38.506440+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,2,4],"acting":[7,2,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.16","ve
rsion":"65'9","reported_seq":45,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.210453+0000","last_change":"2026-03-10T14:51:54.923855+0000","last_active":"2026-03-10T14:52:24.210453+0000","last_peered":"2026-03-10T14:52:24.210453+0000","last_clean":"2026-03-10T14:52:24.210453+0000","last_became_active":"2026-03-10T14:51:54.923672+0000","last_became_peered":"2026-03-10T14:51:54.923672+0000","last_unstale":"2026-03-10T14:52:24.210453+0000","last_undegraded":"2026-03-10T14:52:24.210453+0000","last_fullsized":"2026-03-10T14:52:24.210453+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_clean_scrub_stamp":"2026-03-10T14:51:53.708867+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:14:18.149056+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,7,1],"acting":[5,7,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"2.17","version":"0'0","reported_seq":33,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.242472+0000","last_change":"2026-03-10T14:51:52.736202+0000","last_active":"2026-03-10T14:52:25.242472+0000","last_peered":"2026-03-10T14:52:25.242472+0000","last_clean":"2026-03-10T14:52:25.242472+0000","last_became_active":"2026-03-10T14:51:52.735386+0000","last_became_peered":"2026-03-10T14:51:52.735386+0000","last_unstale":"2026-03-10T14:52:25.242472+0000","last_undegraded":"2026-03-10T14:52:25.242472+0000","last_fullsized":"2026-03-10T14:52:25.242472+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:
51.696445+0000","last_clean_scrub_stamp":"2026-03-10T14:51:51.696445+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T16:13:54.152568+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,5,2],"acting":[6,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"5.10","version":"0'0","reported_seq":25,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.264630+0000","last_change":"2026-03-10T14:51:56.948999+0000","last_active":"2026-03-10T14:52:25.264630+0000","last_peered":"2026-03-10T14:52:25.264630+0000","last_clean":"2026-03-10T14:52:25.264630+0000","last_became_active":"2026-03-10T14:51:56.948543+0000","last_became_peered":"2026-03-10T14:51:56.948543+0000
","last_unstale":"2026-03-10T14:52:25.264630+0000","last_undegraded":"2026-03-10T14:52:25.264630+0000","last_fullsized":"2026-03-10T14:52:25.264630+0000","mapping_epoch":61,"log_start":"0'0","ondisk_log_start":"0'0","created":61,"last_epoch_clean":62,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_clean_scrub_stamp":"2026-03-10T14:51:55.902610+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T19:31:07.688481+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,6],"acting":[7,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.1
3","version":"0'0","reported_seq":21,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.336966+0000","last_change":"2026-03-10T14:51:59.510070+0000","last_active":"2026-03-10T14:52:24.336966+0000","last_peered":"2026-03-10T14:52:24.336966+0000","last_clean":"2026-03-10T14:52:24.336966+0000","last_became_active":"2026-03-10T14:51:59.509938+0000","last_became_peered":"2026-03-10T14:51:59.509938+0000","last_unstale":"2026-03-10T14:52:24.336966+0000","last_undegraded":"2026-03-10T14:52:24.336966+0000","last_fullsized":"2026-03-10T14:52:24.336966+0000","mapping_epoch":63,"log_start":"0'0","ondisk_log_start":"0'0","created":63,"last_epoch_clean":64,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_clean_scrub_stamp":"2026-03-10T14:51:57.913312+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T19:16:38.162429+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,6],"acting":[3,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.1c","version":"65'1","reported_seq":23,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.263985+0000","last_change":"2026-03-10T14:51:58.950284+0000","last_active":"2026-03-10T14:52:25.263985+0000","last_peered":"2026-03-10T14:52:25.263985+0000","last_clean":"2026-03-10T14:52:25.263985+0000","last_became_active":"2026-03-10T14:51:58.950124+0000","last_became_peered":"2026-03-10T14:51:58.950124+0000","last_unstale":"2026-03-10T14:52:25.263985+0000","last_undegraded":"2026-03-10T14:52:25.263985+0000","last_fullsized":"2026-03-10T14:52:25.263985+0000","mapping_epoch":63,"log_start":"0'0","ondisk_log_start":"0'0","created":63,"last_epoch_clean":64,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:57.91
3312+0000","last_clean_scrub_stamp":"2026-03-10T14:51:57.913312+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T21:19:15.871427+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":403,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":3,"num_read_kb":3,"num_write":2,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,5,2],"acting":[7,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.19","version":"65'15","reported_seq":54,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.212430+0000","last_change":"2026-03-10T14:51:54.929255+0000","last_active":"2026-03-10T14:52:24.212430+0000","last_peered":"2026-03-10T14:52:24.212430+0000","last_clean":"2026-03-10T14:52:24.212430+0000","last_became_active":"2026-03-10T14:51:54.929131+0000","last_became_peered":"2026-03-10T14:51:54.929131+0000"
,"last_unstale":"2026-03-10T14:52:24.212430+0000","last_undegraded":"2026-03-10T14:52:24.212430+0000","last_fullsized":"2026-03-10T14:52:24.212430+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_clean_scrub_stamp":"2026-03-10T14:51:53.708867+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-12T01:52:32.147475+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,3,4],"acting":[1,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgi
d":"2.18","version":"0'0","reported_seq":33,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.211221+0000","last_change":"2026-03-10T14:51:52.742783+0000","last_active":"2026-03-10T14:52:24.211221+0000","last_peered":"2026-03-10T14:52:24.211221+0000","last_clean":"2026-03-10T14:52:24.211221+0000","last_became_active":"2026-03-10T14:51:52.742661+0000","last_became_peered":"2026-03-10T14:51:52.742661+0000","last_unstale":"2026-03-10T14:52:24.211221+0000","last_undegraded":"2026-03-10T14:52:24.211221+0000","last_fullsized":"2026-03-10T14:52:24.211221+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_clean_scrub_stamp":"2026-03-10T14:51:51.696445+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:54:01.940073+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,7],"acting":[5,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.1f","version":"65'11","reported_seq":54,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:53:02.791550+0000","last_change":"2026-03-10T14:51:56.946142+0000","last_active":"2026-03-10T14:53:02.791550+0000","last_peered":"2026-03-10T14:53:02.791550+0000","last_clean":"2026-03-10T14:53:02.791550+0000","last_became_active":"2026-03-10T14:51:56.945851+0000","last_became_peered":"2026-03-10T14:51:56.945851+0000","last_unstale":"2026-03-10T14:53:02.791550+0000","last_undegraded":"2026-03-10T14:53:02.791550+0000","last_fullsized":"2026-03-10T14:53:02.791550+0000","mapping_epoch":61,"log_start":"0'0","ondisk_log_start":"0'0","created":61,"last_epoch_clean":62,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:55.9
02610+0000","last_clean_scrub_stamp":"2026-03-10T14:51:55.902610+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T14:58:08.033080+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,7],"acting":[6,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"6.1d","version":"0'0","reported_seq":21,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.212502+0000","last_change":"2026-03-10T14:51:58.955036+0000","last_active":"2026-03-10T14:52:24.212502+0000","last_peered":"2026-03-10T14:52:24.212502+0000","last_clean":"2026-03-10T14:52:24.212502+0000","last_became_active":"2026-03-10T14:51:58.954950+0000","last_became_peered":"2026-03-10T14:51:58.954950+0000",
"last_unstale":"2026-03-10T14:52:24.212502+0000","last_undegraded":"2026-03-10T14:52:24.212502+0000","last_fullsized":"2026-03-10T14:52:24.212502+0000","mapping_epoch":63,"log_start":"0'0","ondisk_log_start":"0'0","created":63,"last_epoch_clean":64,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_clean_scrub_stamp":"2026-03-10T14:51:57.913312+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T17:44:27.561203+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,4],"acting":[1,5,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.18"
,"version":"65'9","reported_seq":45,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.337332+0000","last_change":"2026-03-10T14:51:54.919597+0000","last_active":"2026-03-10T14:52:24.337332+0000","last_peered":"2026-03-10T14:52:24.337332+0000","last_clean":"2026-03-10T14:52:24.337332+0000","last_became_active":"2026-03-10T14:51:54.919323+0000","last_became_peered":"2026-03-10T14:51:54.919323+0000","last_unstale":"2026-03-10T14:52:24.337332+0000","last_undegraded":"2026-03-10T14:52:24.337332+0000","last_fullsized":"2026-03-10T14:52:24.337332+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_clean_scrub_stamp":"2026-03-10T14:51:53.708867+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T17:32:31.794997+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,1],"acting":[3,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.19","version":"58'1","reported_seq":34,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.337294+0000","last_change":"2026-03-10T14:51:52.731338+0000","last_active":"2026-03-10T14:52:24.337294+0000","last_peered":"2026-03-10T14:52:24.337294+0000","last_clean":"2026-03-10T14:52:24.337294+0000","last_became_active":"2026-03-10T14:51:52.731245+0000","last_became_peered":"2026-03-10T14:51:52.731245+0000","last_unstale":"2026-03-10T14:52:24.337294+0000","last_undegraded":"2026-03-10T14:52:24.337294+0000","last_fullsized":"2026-03-10T14:52:24.337294+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51
:51.696445+0000","last_clean_scrub_stamp":"2026-03-10T14:51:51.696445+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T18:49:16.856140+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":46,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":1,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,0],"acting":[3,6,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.1e","version":"0'0","reported_seq":25,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.214215+0000","last_change":"2026-03-10T14:51:56.938136+0000","last_active":"2026-03-10T14:52:24.214215+0000","last_peered":"2026-03-10T14:52:24.214215+0000","last_clean":"2026-03-10T14:52:24.214215+0000","last_became_active":"2026-03-10T14:51:56.938019+0000","last_became_peered":"2026-03-10T14:51:56.938019+00
00","last_unstale":"2026-03-10T14:52:24.214215+0000","last_undegraded":"2026-03-10T14:52:24.214215+0000","last_fullsized":"2026-03-10T14:52:24.214215+0000","mapping_epoch":61,"log_start":"0'0","ondisk_log_start":"0'0","created":61,"last_epoch_clean":62,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_clean_scrub_stamp":"2026-03-10T14:51:55.902610+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T20:58:46.309226+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,2],"acting":[0,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"6
.1e","version":"0'0","reported_seq":21,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.328805+0000","last_change":"2026-03-10T14:51:59.509798+0000","last_active":"2026-03-10T14:52:24.328805+0000","last_peered":"2026-03-10T14:52:24.328805+0000","last_clean":"2026-03-10T14:52:24.328805+0000","last_became_active":"2026-03-10T14:51:59.509192+0000","last_became_peered":"2026-03-10T14:51:59.509192+0000","last_unstale":"2026-03-10T14:52:24.328805+0000","last_undegraded":"2026-03-10T14:52:24.328805+0000","last_fullsized":"2026-03-10T14:52:24.328805+0000","mapping_epoch":63,"log_start":"0'0","ondisk_log_start":"0'0","created":63,"last_epoch_clean":64,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_clean_scrub_stamp":"2026-03-10T14:51:57.913312+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T23:21:23.147338+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,6,5],"acting":[4,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.1a","version":"58'1","reported_seq":41,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:25.242510+0000","last_change":"2026-03-10T14:51:52.742063+0000","last_active":"2026-03-10T14:52:25.242510+0000","last_peered":"2026-03-10T14:52:25.242510+0000","last_clean":"2026-03-10T14:52:25.242510+0000","last_became_active":"2026-03-10T14:51:52.741949+0000","last_became_peered":"2026-03-10T14:51:52.741949+0000","last_unstale":"2026-03-10T14:52:25.242510+0000","last_undegraded":"2026-03-10T14:52:25.242510+0000","last_fullsized":"2026-03-10T14:52:25.242510+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:51.69
6445+0000","last_clean_scrub_stamp":"2026-03-10T14:51:51.696445+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-12T00:09:42.932393+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":436,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":10,"num_read_kb":10,"num_write":2,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,7],"acting":[6,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"3.1b","version":"65'5","reported_seq":39,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.214604+0000","last_change":"2026-03-10T14:51:54.931242+0000","last_active":"2026-03-10T14:52:24.214604+0000","last_peered":"2026-03-10T14:52:24.214604+0000","last_clean":"2026-03-10T14:52:24.214604+0000","last_became_active":"2026-03-10T14:51:54.931165+0000","last_became_peered":"2026-03-10T14:51:54.931165+0000
","last_unstale":"2026-03-10T14:52:24.214604+0000","last_undegraded":"2026-03-10T14:52:24.214604+0000","last_fullsized":"2026-03-10T14:52:24.214604+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_clean_scrub_stamp":"2026-03-10T14:51:53.708867+0000","objects_scrubbed":0,"log_size":5,"log_dups_size":0,"ondisk_log_size":5,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T22:51:25.893354+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":11,"num_read_kb":7,"num_write":6,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,4,7],"acting":[0,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"
5.1d","version":"0'0","reported_seq":25,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.212782+0000","last_change":"2026-03-10T14:51:56.949876+0000","last_active":"2026-03-10T14:52:24.212782+0000","last_peered":"2026-03-10T14:52:24.212782+0000","last_clean":"2026-03-10T14:52:24.212782+0000","last_became_active":"2026-03-10T14:51:56.949674+0000","last_became_peered":"2026-03-10T14:51:56.949674+0000","last_unstale":"2026-03-10T14:52:24.212782+0000","last_undegraded":"2026-03-10T14:52:24.212782+0000","last_fullsized":"2026-03-10T14:52:24.212782+0000","mapping_epoch":61,"log_start":"0'0","ondisk_log_start":"0'0","created":61,"last_epoch_clean":62,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_clean_scrub_stamp":"2026-03-10T14:51:55.902610+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-12T00:25:29.989509+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,4,0],"acting":[1,4,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.1f","version":"0'0","reported_seq":21,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.337639+0000","last_change":"2026-03-10T14:51:59.513207+0000","last_active":"2026-03-10T14:52:24.337639+0000","last_peered":"2026-03-10T14:52:24.337639+0000","last_clean":"2026-03-10T14:52:24.337639+0000","last_became_active":"2026-03-10T14:51:59.513111+0000","last_became_peered":"2026-03-10T14:51:59.513111+0000","last_unstale":"2026-03-10T14:52:24.337639+0000","last_undegraded":"2026-03-10T14:52:24.337639+0000","last_fullsized":"2026-03-10T14:52:24.337639+0000","mapping_epoch":63,"log_start":"0'0","ondisk_log_start":"0'0","created":63,"last_epoch_clean":64,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:57.913312+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:57.913
312+0000","last_clean_scrub_stamp":"2026-03-10T14:51:57.913312+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T18:16:20.305949+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,5],"acting":[3,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.1b","version":"0'0","reported_seq":33,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.337682+0000","last_change":"2026-03-10T14:51:52.748824+0000","last_active":"2026-03-10T14:52:24.337682+0000","last_peered":"2026-03-10T14:52:24.337682+0000","last_clean":"2026-03-10T14:52:24.337682+0000","last_became_active":"2026-03-10T14:51:52.748433+0000","last_became_peered":"2026-03-10T14:51:52.748433+0000","las
t_unstale":"2026-03-10T14:52:24.337682+0000","last_undegraded":"2026-03-10T14:52:24.337682+0000","last_fullsized":"2026-03-10T14:52:24.337682+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:51.696445+0000","last_clean_scrub_stamp":"2026-03-10T14:51:51.696445+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T15:23:21.911199+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,6],"acting":[3,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.1a","ve
rsion":"65'9","reported_seq":45,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.329587+0000","last_change":"2026-03-10T14:51:54.926723+0000","last_active":"2026-03-10T14:52:24.329587+0000","last_peered":"2026-03-10T14:52:24.329587+0000","last_clean":"2026-03-10T14:52:24.329587+0000","last_became_active":"2026-03-10T14:51:54.926582+0000","last_became_peered":"2026-03-10T14:51:54.926582+0000","last_unstale":"2026-03-10T14:52:24.329587+0000","last_undegraded":"2026-03-10T14:52:24.329587+0000","last_fullsized":"2026-03-10T14:52:24.329587+0000","mapping_epoch":59,"log_start":"0'0","ondisk_log_start":"0'0","created":59,"last_epoch_clean":60,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:53.708867+0000","last_clean_scrub_stamp":"2026-03-10T14:51:53.708867+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T22:25:02.502706+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,1,2],"acting":[4,1,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.1c","version":"0'0","reported_seq":25,"reported_epoch":69,"state":"active+clean","last_fresh":"2026-03-10T14:52:24.329551+0000","last_change":"2026-03-10T14:51:56.942920+0000","last_active":"2026-03-10T14:52:24.329551+0000","last_peered":"2026-03-10T14:52:24.329551+0000","last_clean":"2026-03-10T14:52:24.329551+0000","last_became_active":"2026-03-10T14:51:56.942057+0000","last_became_peered":"2026-03-10T14:51:56.942057+0000","last_unstale":"2026-03-10T14:52:24.329551+0000","last_undegraded":"2026-03-10T14:52:24.329551+0000","last_fullsized":"2026-03-10T14:52:24.329551+0000","mapping_epoch":61,"log_start":"0'0","ondisk_log_start":"0'0","created":61,"last_epoch_clean":62,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:51:55.902610+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:51:
55.902610+0000","last_clean_scrub_stamp":"2026-03-10T14:51:55.902610+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-12T00:28:45.685672+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,2],"acting":[4,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]}],"pool_stats":[{"poolid":6,"num_pg":32,"stat_sum":{"num_bytes":416,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":3,"num_read_kb":3,"num_write":3,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"
num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":24576,"data_stored":1248,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":2,"ondisk_log_size":2,"up":96,"acting":96,"num_store_stats":8},{"poolid":5,"num_pg":32,"stat_sum":{"num_bytes":0,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":88,"ondisk_log_size":88,"up":96,"acting":96,"num_store_stats":8},{"poolid":4,"
num_pg":3,"stat_sum":{"num_bytes":408,"num_objects":3,"num_object_clones":0,"num_object_copies":9,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":3,"num_whiteouts":0,"num_read":67,"num_read_kb":62,"num_write":6,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":24576,"data_stored":1224,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":8,"ondisk_log_size":8,"up":9,"acting":9,"num_store_stats":7},{"poolid":3,"num_pg":32,"stat_sum":{"num_bytes":3702,"num_objects":178,"num_object_clones":0,"num_object_copies":534,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":178,"num_whiteouts":0,"num_read":701,"num_read_kb":458,"num_write":417,"num_write_kb":34,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapse
ts":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":417792,"data_stored":11106,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":395,"ondisk_log_size":395,"up":96,"acting":96,"num_store_stats":8},{"poolid":2,"num_pg":32,"stat_sum":{"num_bytes":1613,"num_objects":6,"num_object_clones":0,"num_object_copies":18,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":6,"num_whiteouts":0,"num_read":34,"num_read_kb":34,"num_write":10,"num_write_kb":6,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":73728,"data_stored":4839,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":6,"ondisk_log_size":6,"up":96,"acting":96,"num_store_stats":8},{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":106,"num_read_kb":213,"num_write":69,"num_write_kb":584,"num_scrub
_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":1388544,"data_stored":1377840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":39,"ondisk_log_size":39,"up":3,"acting":3,"num_store_stats":3}],"osd_stats":[{"osd":7,"up_from":54,"seq":231928234005,"num_pgs":60,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27984,"kb_used_data":1148,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939440,"statfs":{"total":21470642176,"available":21441986560,"internally_reserved":0,"allocated":1175552,"data_stored":724699,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1585,"internal_metadata":27457999},"hb_peers":[0,1,2,3,4,5,6],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":6,"up_from":47,"seq":201863462942,"num_pgs":42,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27948,"kb_used_data":1116,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939476,"statfs":{"total":21470642176,"available":21442023424,"internally_reserved":0,"allocated":1142784,"data_stored":723344,"data_compressed":0,"data_compressed_allocat
ed":0,"data_compressed_original":0,"omap_allocated":1589,"internal_metadata":27457995},"hb_peers":[0,1,2,3,4,5,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":5,"up_from":39,"seq":167503724579,"num_pgs":53,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27508,"kb_used_data":668,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939916,"statfs":{"total":21470642176,"available":21442473984,"internally_reserved":0,"allocated":684032,"data_stored":265023,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,2,3,4,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":4,"up_from":32,"seq":137438953514,"num_pgs":56,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27548,"kb_used_data":708,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939876,"statfs":{"total":21470642176,"available":21442433024,"internally_reserved":0,"allocated":724992,"data_stored":265022,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,2,3,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":3,"up_from":26,"seq":111669149745,"num_pgs":50,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27504,"kb_used_data":664,"kb_used_omap":
1,"kb_used_meta":26814,"kb_avail":20939920,"statfs":{"total":21470642176,"available":21442478080,"internally_reserved":0,"allocated":679936,"data_stored":264028,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1588,"internal_metadata":27457996},"hb_peers":[0,1,2,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":2,"up_from":18,"seq":77309411384,"num_pgs":38,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27504,"kb_used_data":664,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939920,"statfs":{"total":21470642176,"available":21442478080,"internally_reserved":0,"allocated":679936,"data_stored":264121,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":1,"up_from":13,"seq":55834574911,"num_pgs":47,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27520,"kb_used_data":680,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939904,"statfs":{"total":21470642176,"available":21442461696,"internally_reserved":0,"allocated":696320,"data_stored":264952,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_laten
cy_ns":0},"alerts":[]},{"osd":0,"up_from":8,"seq":34359738437,"num_pgs":50,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27964,"kb_used_data":1132,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939460,"statfs":{"total":21470642176,"available":21442007040,"internally_reserved":0,"allocated":1159168,"data_stored":724556,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[1,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":16384,"data_stored":1131,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":1039,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_
stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":46,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":16384,"data_stored":574,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":993,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":12288,"data_stored":528,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":12288,"data_stored":528,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":49152,"data_stored":1320,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":57344,"data_stored":1458,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":49152,"data_stored":1282,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":3,"total":0,"available":0,"internally_reserved":0,"
allocated":40960,"data_stored":1144,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":73728,"data_stored":1980,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":45056,"data_stored":1172,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":40960,"data_stored":1100,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":61440,"data_stored":1650,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":389,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":389,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":5,"total":0,"available":0,"
internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":389,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":6,"total":0,"available":0,"internally_reserved"
:0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":403,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":13,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":13,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":403,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocat
ed":8192,"data_stored":416,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-10T14:53:16.163 INFO:tasks.cephadm.ceph_manager.ceph:clean! 2026-03-10T14:53:16.163 INFO:tasks.ceph:Waiting until ceph cluster ceph is healthy... 2026-03-10T14:53:16.163 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy 2026-03-10T14:53:16.163 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 -- ceph health --format=json 2026-03-10T14:53:17.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:17 vm03 bash[23394]: audit 2026-03-10T14:53:16.074278+0000 mgr.y (mgr.24425) 66 : audit [DBG] from='client.24551 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T14:53:17.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:17 vm03 bash[23394]: audit 2026-03-10T14:53:16.074278+0000 mgr.y (mgr.24425) 66 : audit [DBG] from='client.24551 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T14:53:17.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:17 vm03 bash[23394]: cluster 2026-03-10T14:53:16.239698+0000 mgr.y (mgr.24425) 67 : cluster [DBG] pgmap v29: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:53:17.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:17 vm03 bash[23394]: cluster 2026-03-10T14:53:16.239698+0000 mgr.y (mgr.24425) 67 : cluster [DBG] pgmap v29: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:53:17.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:17 vm00 bash[28403]: audit 2026-03-10T14:53:16.074278+0000 mgr.y (mgr.24425) 66 : audit [DBG] from='client.24551 -' 
entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T14:53:17.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:17 vm00 bash[28403]: audit 2026-03-10T14:53:16.074278+0000 mgr.y (mgr.24425) 66 : audit [DBG] from='client.24551 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T14:53:17.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:17 vm00 bash[28403]: cluster 2026-03-10T14:53:16.239698+0000 mgr.y (mgr.24425) 67 : cluster [DBG] pgmap v29: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:53:17.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:17 vm00 bash[28403]: cluster 2026-03-10T14:53:16.239698+0000 mgr.y (mgr.24425) 67 : cluster [DBG] pgmap v29: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:53:17.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:17 vm00 bash[20726]: audit 2026-03-10T14:53:16.074278+0000 mgr.y (mgr.24425) 66 : audit [DBG] from='client.24551 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T14:53:17.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:17 vm00 bash[20726]: audit 2026-03-10T14:53:16.074278+0000 mgr.y (mgr.24425) 66 : audit [DBG] from='client.24551 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T14:53:17.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:17 vm00 bash[20726]: cluster 2026-03-10T14:53:16.239698+0000 mgr.y (mgr.24425) 67 : cluster [DBG] pgmap v29: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:53:17.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:17 vm00 bash[20726]: cluster 2026-03-10T14:53:16.239698+0000 mgr.y 
(mgr.24425) 67 : cluster [DBG] pgmap v29: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:53:18.875 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 14:53:18 vm03 bash[48459]: debug there is no tcmu-runner data available 2026-03-10T14:53:19.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:19 vm03 bash[23394]: cluster 2026-03-10T14:53:18.240010+0000 mgr.y (mgr.24425) 68 : cluster [DBG] pgmap v30: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:53:19.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:19 vm03 bash[23394]: cluster 2026-03-10T14:53:18.240010+0000 mgr.y (mgr.24425) 68 : cluster [DBG] pgmap v30: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:53:19.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:19 vm03 bash[23394]: audit 2026-03-10T14:53:18.528811+0000 mgr.y (mgr.24425) 69 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:53:19.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:19 vm03 bash[23394]: audit 2026-03-10T14:53:18.528811+0000 mgr.y (mgr.24425) 69 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:53:19.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:19 vm00 bash[28403]: cluster 2026-03-10T14:53:18.240010+0000 mgr.y (mgr.24425) 68 : cluster [DBG] pgmap v30: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:53:19.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:19 vm00 bash[28403]: cluster 2026-03-10T14:53:18.240010+0000 mgr.y (mgr.24425) 68 : cluster [DBG] pgmap v30: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-10T14:53:19.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:19 vm00 bash[28403]: audit 2026-03-10T14:53:18.528811+0000 mgr.y (mgr.24425) 69 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:53:19.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:19 vm00 bash[28403]: audit 2026-03-10T14:53:18.528811+0000 mgr.y (mgr.24425) 69 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:53:19.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:19 vm00 bash[20726]: cluster 2026-03-10T14:53:18.240010+0000 mgr.y (mgr.24425) 68 : cluster [DBG] pgmap v30: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:53:19.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:19 vm00 bash[20726]: cluster 2026-03-10T14:53:18.240010+0000 mgr.y (mgr.24425) 68 : cluster [DBG] pgmap v30: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:53:19.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:19 vm00 bash[20726]: audit 2026-03-10T14:53:18.528811+0000 mgr.y (mgr.24425) 69 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:53:19.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:19 vm00 bash[20726]: audit 2026-03-10T14:53:18.528811+0000 mgr.y (mgr.24425) 69 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:53:20.841 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/mon.c/config 2026-03-10T14:53:21.130 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T14:53:21.130 
INFO:teuthology.orchestra.run.vm00.stdout:{"status":"HEALTH_OK","checks":{},"mutes":[]} 2026-03-10T14:53:21.185 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy done 2026-03-10T14:53:21.185 INFO:tasks.cephadm:Setup complete, yielding 2026-03-10T14:53:21.185 INFO:teuthology.run_tasks:Running task workunit... 2026-03-10T14:53:21.189 INFO:tasks.workunit:Pulling workunits from ref 75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b 2026-03-10T14:53:21.189 INFO:tasks.workunit:Making a separate scratch dir for every client... 2026-03-10T14:53:21.189 DEBUG:teuthology.orchestra.run.vm00:> stat -- /home/ubuntu/cephtest/mnt.0 2026-03-10T14:53:21.193 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-10T14:53:21.193 INFO:teuthology.orchestra.run.vm00.stderr:stat: cannot statx '/home/ubuntu/cephtest/mnt.0': No such file or directory 2026-03-10T14:53:21.193 DEBUG:teuthology.orchestra.run.vm00:> mkdir -- /home/ubuntu/cephtest/mnt.0 2026-03-10T14:53:21.239 INFO:tasks.workunit:Created dir /home/ubuntu/cephtest/mnt.0 2026-03-10T14:53:21.239 DEBUG:teuthology.orchestra.run.vm00:> cd -- /home/ubuntu/cephtest/mnt.0 && mkdir -- client.0 2026-03-10T14:53:21.282 INFO:tasks.workunit:timeout=1h 2026-03-10T14:53:21.282 INFO:tasks.workunit:cleanup=True 2026-03-10T14:53:21.282 DEBUG:teuthology.orchestra.run.vm00:> rm -rf /home/ubuntu/cephtest/clone.client.0 && git clone https://github.com/kshtsk/ceph.git /home/ubuntu/cephtest/clone.client.0 && cd /home/ubuntu/cephtest/clone.client.0 && git checkout 75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b 2026-03-10T14:53:21.328 INFO:tasks.workunit.client.0.vm00.stderr:Cloning into '/home/ubuntu/cephtest/clone.client.0'... 
2026-03-10T14:53:21.855 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:21 vm00 bash[28403]: cluster 2026-03-10T14:53:20.240309+0000 mgr.y (mgr.24425) 70 : cluster [DBG] pgmap v31: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:53:21.855 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:21 vm00 bash[28403]: cluster 2026-03-10T14:53:20.240309+0000 mgr.y (mgr.24425) 70 : cluster [DBG] pgmap v31: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:53:21.855 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:21 vm00 bash[28403]: audit 2026-03-10T14:53:21.132201+0000 mon.c (mon.2) 24 : audit [DBG] from='client.? 192.168.123.100:0/3819805427' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-10T14:53:21.855 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:21 vm00 bash[28403]: audit 2026-03-10T14:53:21.132201+0000 mon.c (mon.2) 24 : audit [DBG] from='client.? 192.168.123.100:0/3819805427' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-10T14:53:21.855 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:21 vm00 bash[20726]: cluster 2026-03-10T14:53:20.240309+0000 mgr.y (mgr.24425) 70 : cluster [DBG] pgmap v31: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:53:21.855 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:21 vm00 bash[20726]: cluster 2026-03-10T14:53:20.240309+0000 mgr.y (mgr.24425) 70 : cluster [DBG] pgmap v31: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:53:21.855 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:21 vm00 bash[20726]: audit 2026-03-10T14:53:21.132201+0000 mon.c (mon.2) 24 : audit [DBG] from='client.? 
192.168.123.100:0/3819805427' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-10T14:53:21.855 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:21 vm00 bash[20726]: audit 2026-03-10T14:53:21.132201+0000 mon.c (mon.2) 24 : audit [DBG] from='client.? 192.168.123.100:0/3819805427' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-10T14:53:21.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:21 vm03 bash[23394]: cluster 2026-03-10T14:53:20.240309+0000 mgr.y (mgr.24425) 70 : cluster [DBG] pgmap v31: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:53:21.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:21 vm03 bash[23394]: cluster 2026-03-10T14:53:20.240309+0000 mgr.y (mgr.24425) 70 : cluster [DBG] pgmap v31: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:53:21.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:21 vm03 bash[23394]: audit 2026-03-10T14:53:21.132201+0000 mon.c (mon.2) 24 : audit [DBG] from='client.? 192.168.123.100:0/3819805427' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-10T14:53:21.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:21 vm03 bash[23394]: audit 2026-03-10T14:53:21.132201+0000 mon.c (mon.2) 24 : audit [DBG] from='client.? 
192.168.123.100:0/3819805427' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-10T14:53:23.770 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:23 vm00 bash[20726]: cluster 2026-03-10T14:53:22.240720+0000 mgr.y (mgr.24425) 71 : cluster [DBG] pgmap v32: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:53:23.770 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:23 vm00 bash[20726]: cluster 2026-03-10T14:53:22.240720+0000 mgr.y (mgr.24425) 71 : cluster [DBG] pgmap v32: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:53:23.770 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:53:23 vm00 bash[21005]: ::ffff:192.168.123.103 - - [10/Mar/2026:14:53:23] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:53:23.770 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:23 vm00 bash[28403]: cluster 2026-03-10T14:53:22.240720+0000 mgr.y (mgr.24425) 71 : cluster [DBG] pgmap v32: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:53:23.770 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:23 vm00 bash[28403]: cluster 2026-03-10T14:53:22.240720+0000 mgr.y (mgr.24425) 71 : cluster [DBG] pgmap v32: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:53:23.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:23 vm03 bash[23394]: cluster 2026-03-10T14:53:22.240720+0000 mgr.y (mgr.24425) 71 : cluster [DBG] pgmap v32: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:53:23.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:23 vm03 bash[23394]: cluster 2026-03-10T14:53:22.240720+0000 mgr.y (mgr.24425) 71 : cluster [DBG] pgmap v32: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 
B/s rd, 0 op/s 2026-03-10T14:53:24.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:24 vm03 bash[23394]: audit 2026-03-10T14:53:24.250211+0000 mon.a (mon.0) 856 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.8", "id": [7, 2]}]: dispatch 2026-03-10T14:53:24.898 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:24 vm03 bash[23394]: audit 2026-03-10T14:53:24.250211+0000 mon.a (mon.0) 856 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.8", "id": [7, 2]}]: dispatch 2026-03-10T14:53:24.898 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:24 vm03 bash[23394]: audit 2026-03-10T14:53:24.334511+0000 mon.a (mon.0) 857 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:53:24.898 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:24 vm03 bash[23394]: audit 2026-03-10T14:53:24.334511+0000 mon.a (mon.0) 857 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:53:24.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:24 vm00 bash[20726]: audit 2026-03-10T14:53:24.250211+0000 mon.a (mon.0) 856 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.8", "id": [7, 2]}]: dispatch 2026-03-10T14:53:24.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:24 vm00 bash[20726]: audit 2026-03-10T14:53:24.250211+0000 mon.a (mon.0) 856 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.8", "id": [7, 2]}]: dispatch 2026-03-10T14:53:24.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:24 vm00 bash[20726]: audit 
2026-03-10T14:53:24.334511+0000 mon.a (mon.0) 857 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:53:24.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:24 vm00 bash[20726]: audit 2026-03-10T14:53:24.334511+0000 mon.a (mon.0) 857 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:53:24.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:24 vm00 bash[28403]: audit 2026-03-10T14:53:24.250211+0000 mon.a (mon.0) 856 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.8", "id": [7, 2]}]: dispatch 2026-03-10T14:53:24.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:24 vm00 bash[28403]: audit 2026-03-10T14:53:24.250211+0000 mon.a (mon.0) 856 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.8", "id": [7, 2]}]: dispatch 2026-03-10T14:53:24.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:24 vm00 bash[28403]: audit 2026-03-10T14:53:24.334511+0000 mon.a (mon.0) 857 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:53:24.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:24 vm00 bash[28403]: audit 2026-03-10T14:53:24.334511+0000 mon.a (mon.0) 857 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:53:25.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:25 vm03 bash[23394]: cluster 2026-03-10T14:53:24.241032+0000 mgr.y (mgr.24425) 72 : cluster [DBG] pgmap v33: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-10T14:53:25.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:25 vm03 bash[23394]: cluster 2026-03-10T14:53:24.241032+0000 mgr.y (mgr.24425) 72 : cluster [DBG] pgmap v33: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:53:25.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:25 vm03 bash[23394]: audit 2026-03-10T14:53:24.527921+0000 mon.a (mon.0) 858 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.8", "id": [7, 2]}]': finished 2026-03-10T14:53:25.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:25 vm03 bash[23394]: audit 2026-03-10T14:53:24.527921+0000 mon.a (mon.0) 858 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.8", "id": [7, 2]}]': finished 2026-03-10T14:53:25.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:25 vm03 bash[23394]: cluster 2026-03-10T14:53:24.532993+0000 mon.a (mon.0) 859 : cluster [DBG] osdmap e70: 8 total, 8 up, 8 in 2026-03-10T14:53:25.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:25 vm03 bash[23394]: cluster 2026-03-10T14:53:24.532993+0000 mon.a (mon.0) 859 : cluster [DBG] osdmap e70: 8 total, 8 up, 8 in 2026-03-10T14:53:25.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:25 vm00 bash[20726]: cluster 2026-03-10T14:53:24.241032+0000 mgr.y (mgr.24425) 72 : cluster [DBG] pgmap v33: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:53:25.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:25 vm00 bash[20726]: cluster 2026-03-10T14:53:24.241032+0000 mgr.y (mgr.24425) 72 : cluster [DBG] pgmap v33: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:53:25.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:25 vm00 
bash[20726]: audit 2026-03-10T14:53:24.527921+0000 mon.a (mon.0) 858 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.8", "id": [7, 2]}]': finished 2026-03-10T14:53:25.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:25 vm00 bash[20726]: audit 2026-03-10T14:53:24.527921+0000 mon.a (mon.0) 858 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.8", "id": [7, 2]}]': finished 2026-03-10T14:53:25.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:25 vm00 bash[20726]: cluster 2026-03-10T14:53:24.532993+0000 mon.a (mon.0) 859 : cluster [DBG] osdmap e70: 8 total, 8 up, 8 in 2026-03-10T14:53:25.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:25 vm00 bash[20726]: cluster 2026-03-10T14:53:24.532993+0000 mon.a (mon.0) 859 : cluster [DBG] osdmap e70: 8 total, 8 up, 8 in 2026-03-10T14:53:25.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:25 vm00 bash[28403]: cluster 2026-03-10T14:53:24.241032+0000 mgr.y (mgr.24425) 72 : cluster [DBG] pgmap v33: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:53:25.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:25 vm00 bash[28403]: cluster 2026-03-10T14:53:24.241032+0000 mgr.y (mgr.24425) 72 : cluster [DBG] pgmap v33: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:53:25.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:25 vm00 bash[28403]: audit 2026-03-10T14:53:24.527921+0000 mon.a (mon.0) 858 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.8", "id": [7, 2]}]': finished 2026-03-10T14:53:25.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:25 vm00 bash[28403]: audit 2026-03-10T14:53:24.527921+0000 
mon.a (mon.0) 858 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.8", "id": [7, 2]}]': finished 2026-03-10T14:53:25.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:25 vm00 bash[28403]: cluster 2026-03-10T14:53:24.532993+0000 mon.a (mon.0) 859 : cluster [DBG] osdmap e70: 8 total, 8 up, 8 in 2026-03-10T14:53:25.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:25 vm00 bash[28403]: cluster 2026-03-10T14:53:24.532993+0000 mon.a (mon.0) 859 : cluster [DBG] osdmap e70: 8 total, 8 up, 8 in 2026-03-10T14:53:26.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:26 vm03 bash[23394]: cluster 2026-03-10T14:53:25.533622+0000 mon.a (mon.0) 860 : cluster [DBG] osdmap e71: 8 total, 8 up, 8 in 2026-03-10T14:53:26.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:26 vm03 bash[23394]: cluster 2026-03-10T14:53:25.533622+0000 mon.a (mon.0) 860 : cluster [DBG] osdmap e71: 8 total, 8 up, 8 in 2026-03-10T14:53:26.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:26 vm00 bash[20726]: cluster 2026-03-10T14:53:25.533622+0000 mon.a (mon.0) 860 : cluster [DBG] osdmap e71: 8 total, 8 up, 8 in 2026-03-10T14:53:26.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:26 vm00 bash[20726]: cluster 2026-03-10T14:53:25.533622+0000 mon.a (mon.0) 860 : cluster [DBG] osdmap e71: 8 total, 8 up, 8 in 2026-03-10T14:53:26.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:26 vm00 bash[28403]: cluster 2026-03-10T14:53:25.533622+0000 mon.a (mon.0) 860 : cluster [DBG] osdmap e71: 8 total, 8 up, 8 in 2026-03-10T14:53:26.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:26 vm00 bash[28403]: cluster 2026-03-10T14:53:25.533622+0000 mon.a (mon.0) 860 : cluster [DBG] osdmap e71: 8 total, 8 up, 8 in 2026-03-10T14:53:27.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:27 vm03 bash[23394]: cluster 2026-03-10T14:53:26.241385+0000 mgr.y (mgr.24425) 73 : cluster [DBG] pgmap v36: 
132 pgs: 1 peering, 131 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:53:27.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:27 vm03 bash[23394]: cluster 2026-03-10T14:53:26.241385+0000 mgr.y (mgr.24425) 73 : cluster [DBG] pgmap v36: 132 pgs: 1 peering, 131 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:53:27.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:27 vm03 bash[23394]: cluster 2026-03-10T14:53:26.547480+0000 mon.a (mon.0) 861 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY) 2026-03-10T14:53:27.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:27 vm03 bash[23394]: cluster 2026-03-10T14:53:26.547480+0000 mon.a (mon.0) 861 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY) 2026-03-10T14:53:27.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:27 vm00 bash[28403]: cluster 2026-03-10T14:53:26.241385+0000 mgr.y (mgr.24425) 73 : cluster [DBG] pgmap v36: 132 pgs: 1 peering, 131 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:53:27.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:27 vm00 bash[28403]: cluster 2026-03-10T14:53:26.241385+0000 mgr.y (mgr.24425) 73 : cluster [DBG] pgmap v36: 132 pgs: 1 peering, 131 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:53:27.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:27 vm00 bash[28403]: cluster 2026-03-10T14:53:26.547480+0000 mon.a (mon.0) 861 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY) 2026-03-10T14:53:27.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:27 vm00 bash[28403]: cluster 2026-03-10T14:53:26.547480+0000 mon.a (mon.0) 861 : cluster [WRN] Health check failed: Reduced 
data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY) 2026-03-10T14:53:27.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:27 vm00 bash[20726]: cluster 2026-03-10T14:53:26.241385+0000 mgr.y (mgr.24425) 73 : cluster [DBG] pgmap v36: 132 pgs: 1 peering, 131 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:53:27.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:27 vm00 bash[20726]: cluster 2026-03-10T14:53:26.241385+0000 mgr.y (mgr.24425) 73 : cluster [DBG] pgmap v36: 132 pgs: 1 peering, 131 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:53:27.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:27 vm00 bash[20726]: cluster 2026-03-10T14:53:26.547480+0000 mon.a (mon.0) 861 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY) 2026-03-10T14:53:27.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:27 vm00 bash[20726]: cluster 2026-03-10T14:53:26.547480+0000 mon.a (mon.0) 861 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY) 2026-03-10T14:53:28.875 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 14:53:28 vm03 bash[48459]: debug there is no tcmu-runner data available 2026-03-10T14:53:29.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:29 vm03 bash[23394]: cluster 2026-03-10T14:53:28.241753+0000 mgr.y (mgr.24425) 74 : cluster [DBG] pgmap v37: 132 pgs: 1 peering, 131 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-10T14:53:29.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:29 vm03 bash[23394]: cluster 2026-03-10T14:53:28.241753+0000 mgr.y (mgr.24425) 74 : cluster [DBG] pgmap v37: 132 pgs: 1 peering, 131 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-10T14:53:29.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 
14:53:29 vm03 bash[23394]: audit 2026-03-10T14:53:28.529815+0000 mgr.y (mgr.24425) 75 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:53:29.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:29 vm03 bash[23394]: audit 2026-03-10T14:53:28.529815+0000 mgr.y (mgr.24425) 75 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:53:29.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:29 vm00 bash[20726]: cluster 2026-03-10T14:53:28.241753+0000 mgr.y (mgr.24425) 74 : cluster [DBG] pgmap v37: 132 pgs: 1 peering, 131 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-10T14:53:29.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:29 vm00 bash[20726]: cluster 2026-03-10T14:53:28.241753+0000 mgr.y (mgr.24425) 74 : cluster [DBG] pgmap v37: 132 pgs: 1 peering, 131 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-10T14:53:29.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:29 vm00 bash[20726]: audit 2026-03-10T14:53:28.529815+0000 mgr.y (mgr.24425) 75 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:53:29.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:29 vm00 bash[20726]: audit 2026-03-10T14:53:28.529815+0000 mgr.y (mgr.24425) 75 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:53:29.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:29 vm00 bash[28403]: cluster 2026-03-10T14:53:28.241753+0000 mgr.y (mgr.24425) 74 : cluster [DBG] pgmap v37: 132 pgs: 1 peering, 131 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-10T14:53:29.966 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:29 vm00 bash[28403]: cluster 2026-03-10T14:53:28.241753+0000 mgr.y (mgr.24425) 74 : cluster [DBG] pgmap v37: 132 pgs: 1 peering, 131 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-10T14:53:29.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:29 vm00 bash[28403]: audit 2026-03-10T14:53:28.529815+0000 mgr.y (mgr.24425) 75 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:53:29.966 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:29 vm00 bash[28403]: audit 2026-03-10T14:53:28.529815+0000 mgr.y (mgr.24425) 75 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:53:32.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:32 vm00 bash[20726]: cluster 2026-03-10T14:53:30.242151+0000 mgr.y (mgr.24425) 76 : cluster [DBG] pgmap v38: 132 pgs: 1 peering, 131 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:53:32.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:32 vm00 bash[20726]: cluster 2026-03-10T14:53:30.242151+0000 mgr.y (mgr.24425) 76 : cluster [DBG] pgmap v38: 132 pgs: 1 peering, 131 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:53:32.466 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:32 vm00 bash[28403]: cluster 2026-03-10T14:53:30.242151+0000 mgr.y (mgr.24425) 76 : cluster [DBG] pgmap v38: 132 pgs: 1 peering, 131 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:53:32.466 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:32 vm00 bash[28403]: cluster 2026-03-10T14:53:30.242151+0000 mgr.y (mgr.24425) 76 : cluster [DBG] pgmap v38: 132 pgs: 1 peering, 131 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 
GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:53:32.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:32 vm03 bash[23394]: cluster 2026-03-10T14:53:30.242151+0000 mgr.y (mgr.24425) 76 : cluster [DBG] pgmap v38: 132 pgs: 1 peering, 131 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:53:32.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:32 vm03 bash[23394]: cluster 2026-03-10T14:53:30.242151+0000 mgr.y (mgr.24425) 76 : cluster [DBG] pgmap v38: 132 pgs: 1 peering, 131 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:53:33.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:33 vm00 bash[20726]: cluster 2026-03-10T14:53:32.242591+0000 mgr.y (mgr.24425) 77 : cluster [DBG] pgmap v39: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:53:33.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:33 vm00 bash[20726]: cluster 2026-03-10T14:53:32.242591+0000 mgr.y (mgr.24425) 77 : cluster [DBG] pgmap v39: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:53:33.466 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:33 vm00 bash[28403]: cluster 2026-03-10T14:53:32.242591+0000 mgr.y (mgr.24425) 77 : cluster [DBG] pgmap v39: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:53:33.466 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:33 vm00 bash[28403]: cluster 2026-03-10T14:53:32.242591+0000 mgr.y (mgr.24425) 77 : cluster [DBG] pgmap v39: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:53:33.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:33 vm03 bash[23394]: cluster 2026-03-10T14:53:32.242591+0000 mgr.y (mgr.24425) 77 : cluster [DBG] pgmap v39: 132 pgs: 132 active+clean; 455 KiB data, 216 
MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:53:33.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:33 vm03 bash[23394]: cluster 2026-03-10T14:53:32.242591+0000 mgr.y (mgr.24425) 77 : cluster [DBG] pgmap v39: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:53:34.183 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:34 vm00 bash[20726]: cluster 2026-03-10T14:53:33.129459+0000 mon.a (mon.0) 862 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive, 1 pg peering) 2026-03-10T14:53:34.183 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:34 vm00 bash[20726]: cluster 2026-03-10T14:53:33.129459+0000 mon.a (mon.0) 862 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive, 1 pg peering) 2026-03-10T14:53:34.183 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:34 vm00 bash[20726]: cluster 2026-03-10T14:53:33.129478+0000 mon.a (mon.0) 863 : cluster [INF] Cluster is now healthy 2026-03-10T14:53:34.183 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:34 vm00 bash[20726]: cluster 2026-03-10T14:53:33.129478+0000 mon.a (mon.0) 863 : cluster [INF] Cluster is now healthy 2026-03-10T14:53:34.183 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:53:33 vm00 bash[21005]: ::ffff:192.168.123.103 - - [10/Mar/2026:14:53:33] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:53:34.466 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:34 vm00 bash[28403]: cluster 2026-03-10T14:53:33.129459+0000 mon.a (mon.0) 862 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive, 1 pg peering) 2026-03-10T14:53:34.466 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:34 vm00 bash[28403]: cluster 2026-03-10T14:53:33.129459+0000 mon.a (mon.0) 862 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg 
inactive, 1 pg peering) 2026-03-10T14:53:34.466 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:34 vm00 bash[28403]: cluster 2026-03-10T14:53:33.129478+0000 mon.a (mon.0) 863 : cluster [INF] Cluster is now healthy 2026-03-10T14:53:34.466 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:34 vm00 bash[28403]: cluster 2026-03-10T14:53:33.129478+0000 mon.a (mon.0) 863 : cluster [INF] Cluster is now healthy 2026-03-10T14:53:34.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:34 vm03 bash[23394]: cluster 2026-03-10T14:53:33.129459+0000 mon.a (mon.0) 862 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive, 1 pg peering) 2026-03-10T14:53:34.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:34 vm03 bash[23394]: cluster 2026-03-10T14:53:33.129459+0000 mon.a (mon.0) 862 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive, 1 pg peering) 2026-03-10T14:53:34.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:34 vm03 bash[23394]: cluster 2026-03-10T14:53:33.129478+0000 mon.a (mon.0) 863 : cluster [INF] Cluster is now healthy 2026-03-10T14:53:34.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:34 vm03 bash[23394]: cluster 2026-03-10T14:53:33.129478+0000 mon.a (mon.0) 863 : cluster [INF] Cluster is now healthy 2026-03-10T14:53:35.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:35 vm00 bash[20726]: cluster 2026-03-10T14:53:34.243012+0000 mgr.y (mgr.24425) 78 : cluster [DBG] pgmap v40: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-10T14:53:35.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:35 vm00 bash[20726]: cluster 2026-03-10T14:53:34.243012+0000 mgr.y (mgr.24425) 78 : cluster [DBG] pgmap v40: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-10T14:53:35.466 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:35 vm00 
bash[28403]: cluster 2026-03-10T14:53:34.243012+0000 mgr.y (mgr.24425) 78 : cluster [DBG] pgmap v40: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-10T14:53:35.467 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:35 vm00 bash[28403]: cluster 2026-03-10T14:53:34.243012+0000 mgr.y (mgr.24425) 78 : cluster [DBG] pgmap v40: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-10T14:53:35.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:35 vm03 bash[23394]: cluster 2026-03-10T14:53:34.243012+0000 mgr.y (mgr.24425) 78 : cluster [DBG] pgmap v40: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-10T14:53:35.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:35 vm03 bash[23394]: cluster 2026-03-10T14:53:34.243012+0000 mgr.y (mgr.24425) 78 : cluster [DBG] pgmap v40: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-10T14:53:37.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:37 vm03 bash[23394]: cluster 2026-03-10T14:53:36.243546+0000 mgr.y (mgr.24425) 79 : cluster [DBG] pgmap v41: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 956 B/s rd, 0 op/s 2026-03-10T14:53:37.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:37 vm03 bash[23394]: cluster 2026-03-10T14:53:36.243546+0000 mgr.y (mgr.24425) 79 : cluster [DBG] pgmap v41: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 956 B/s rd, 0 op/s 2026-03-10T14:53:37.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:37 vm00 bash[20726]: cluster 2026-03-10T14:53:36.243546+0000 mgr.y (mgr.24425) 79 : cluster [DBG] pgmap v41: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 956 B/s rd, 0 op/s 2026-03-10T14:53:37.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:37 vm00 
bash[20726]: cluster 2026-03-10T14:53:36.243546+0000 mgr.y (mgr.24425) 79 : cluster [DBG] pgmap v41: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 956 B/s rd, 0 op/s 2026-03-10T14:53:37.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:37 vm00 bash[28403]: cluster 2026-03-10T14:53:36.243546+0000 mgr.y (mgr.24425) 79 : cluster [DBG] pgmap v41: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 956 B/s rd, 0 op/s 2026-03-10T14:53:37.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:37 vm00 bash[28403]: cluster 2026-03-10T14:53:36.243546+0000 mgr.y (mgr.24425) 79 : cluster [DBG] pgmap v41: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 956 B/s rd, 0 op/s 2026-03-10T14:53:38.875 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 14:53:38 vm03 bash[48459]: debug there is no tcmu-runner data available 2026-03-10T14:53:39.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:39 vm00 bash[28403]: cluster 2026-03-10T14:53:38.243878+0000 mgr.y (mgr.24425) 80 : cluster [DBG] pgmap v42: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:53:39.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:39 vm00 bash[28403]: cluster 2026-03-10T14:53:38.243878+0000 mgr.y (mgr.24425) 80 : cluster [DBG] pgmap v42: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:53:39.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:39 vm00 bash[28403]: audit 2026-03-10T14:53:38.536653+0000 mgr.y (mgr.24425) 81 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:53:39.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:39 vm00 bash[28403]: audit 2026-03-10T14:53:38.536653+0000 mgr.y (mgr.24425) 81 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' 
cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:53:39.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:39 vm00 bash[28403]: audit 2026-03-10T14:53:39.340589+0000 mon.a (mon.0) 864 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:53:39.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:39 vm00 bash[28403]: audit 2026-03-10T14:53:39.340589+0000 mon.a (mon.0) 864 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:53:39.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:39 vm00 bash[20726]: cluster 2026-03-10T14:53:38.243878+0000 mgr.y (mgr.24425) 80 : cluster [DBG] pgmap v42: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:53:39.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:39 vm00 bash[20726]: cluster 2026-03-10T14:53:38.243878+0000 mgr.y (mgr.24425) 80 : cluster [DBG] pgmap v42: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:53:39.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:39 vm00 bash[20726]: audit 2026-03-10T14:53:38.536653+0000 mgr.y (mgr.24425) 81 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:53:39.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:39 vm00 bash[20726]: audit 2026-03-10T14:53:38.536653+0000 mgr.y (mgr.24425) 81 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:53:39.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:39 vm00 bash[20726]: audit 2026-03-10T14:53:39.340589+0000 mon.a (mon.0) 864 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' 
entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:53:39.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:39 vm00 bash[20726]: audit 2026-03-10T14:53:39.340589+0000 mon.a (mon.0) 864 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:53:39.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:39 vm03 bash[23394]: cluster 2026-03-10T14:53:38.243878+0000 mgr.y (mgr.24425) 80 : cluster [DBG] pgmap v42: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:53:39.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:39 vm03 bash[23394]: cluster 2026-03-10T14:53:38.243878+0000 mgr.y (mgr.24425) 80 : cluster [DBG] pgmap v42: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:53:39.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:39 vm03 bash[23394]: audit 2026-03-10T14:53:38.536653+0000 mgr.y (mgr.24425) 81 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:53:39.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:39 vm03 bash[23394]: audit 2026-03-10T14:53:38.536653+0000 mgr.y (mgr.24425) 81 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:53:39.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:39 vm03 bash[23394]: audit 2026-03-10T14:53:39.340589+0000 mon.a (mon.0) 864 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:53:39.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:39 vm03 bash[23394]: audit 2026-03-10T14:53:39.340589+0000 mon.a (mon.0) 864 : audit [DBG] from='mgr.24425 
192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:53:41.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:41 vm03 bash[23394]: cluster 2026-03-10T14:53:40.244226+0000 mgr.y (mgr.24425) 82 : cluster [DBG] pgmap v43: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:53:41.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:41 vm03 bash[23394]: cluster 2026-03-10T14:53:40.244226+0000 mgr.y (mgr.24425) 82 : cluster [DBG] pgmap v43: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:53:41.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:41 vm00 bash[28403]: cluster 2026-03-10T14:53:40.244226+0000 mgr.y (mgr.24425) 82 : cluster [DBG] pgmap v43: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:53:41.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:41 vm00 bash[28403]: cluster 2026-03-10T14:53:40.244226+0000 mgr.y (mgr.24425) 82 : cluster [DBG] pgmap v43: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:53:41.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:41 vm00 bash[20726]: cluster 2026-03-10T14:53:40.244226+0000 mgr.y (mgr.24425) 82 : cluster [DBG] pgmap v43: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:53:41.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:41 vm00 bash[20726]: cluster 2026-03-10T14:53:40.244226+0000 mgr.y (mgr.24425) 82 : cluster [DBG] pgmap v43: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:53:43.875 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 14:53:43 vm03 bash[50670]: logger=infra.usagestats t=2026-03-10T14:53:43.523357962Z level=info 
msg="Usage stats are ready to report" 2026-03-10T14:53:43.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:43 vm03 bash[23394]: cluster 2026-03-10T14:53:42.244620+0000 mgr.y (mgr.24425) 83 : cluster [DBG] pgmap v44: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:53:43.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:43 vm03 bash[23394]: cluster 2026-03-10T14:53:42.244620+0000 mgr.y (mgr.24425) 83 : cluster [DBG] pgmap v44: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:53:43.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:43 vm00 bash[28403]: cluster 2026-03-10T14:53:42.244620+0000 mgr.y (mgr.24425) 83 : cluster [DBG] pgmap v44: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:53:43.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:43 vm00 bash[28403]: cluster 2026-03-10T14:53:42.244620+0000 mgr.y (mgr.24425) 83 : cluster [DBG] pgmap v44: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:53:43.965 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:53:43 vm00 bash[21005]: ::ffff:192.168.123.103 - - [10/Mar/2026:14:53:43] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:53:43.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:43 vm00 bash[20726]: cluster 2026-03-10T14:53:42.244620+0000 mgr.y (mgr.24425) 83 : cluster [DBG] pgmap v44: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:53:43.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:43 vm00 bash[20726]: cluster 2026-03-10T14:53:42.244620+0000 mgr.y (mgr.24425) 83 : cluster [DBG] pgmap v44: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:53:45.875 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:45 vm03 bash[23394]: cluster 2026-03-10T14:53:44.244922+0000 mgr.y (mgr.24425) 84 : cluster [DBG] pgmap v45: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:53:45.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:45 vm03 bash[23394]: cluster 2026-03-10T14:53:44.244922+0000 mgr.y (mgr.24425) 84 : cluster [DBG] pgmap v45: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:53:45.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:45 vm00 bash[28403]: cluster 2026-03-10T14:53:44.244922+0000 mgr.y (mgr.24425) 84 : cluster [DBG] pgmap v45: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:53:45.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:45 vm00 bash[28403]: cluster 2026-03-10T14:53:44.244922+0000 mgr.y (mgr.24425) 84 : cluster [DBG] pgmap v45: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:53:45.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:45 vm00 bash[20726]: cluster 2026-03-10T14:53:44.244922+0000 mgr.y (mgr.24425) 84 : cluster [DBG] pgmap v45: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:53:45.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:45 vm00 bash[20726]: cluster 2026-03-10T14:53:44.244922+0000 mgr.y (mgr.24425) 84 : cluster [DBG] pgmap v45: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:53:47.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:47 vm03 bash[23394]: cluster 2026-03-10T14:53:46.245409+0000 mgr.y (mgr.24425) 85 : cluster [DBG] pgmap v46: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:53:47.875 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:47 vm03 bash[23394]: cluster 2026-03-10T14:53:46.245409+0000 mgr.y (mgr.24425) 85 : cluster [DBG] pgmap v46: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:53:47.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:47 vm00 bash[28403]: cluster 2026-03-10T14:53:46.245409+0000 mgr.y (mgr.24425) 85 : cluster [DBG] pgmap v46: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:53:47.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:47 vm00 bash[28403]: cluster 2026-03-10T14:53:46.245409+0000 mgr.y (mgr.24425) 85 : cluster [DBG] pgmap v46: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:53:47.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:47 vm00 bash[20726]: cluster 2026-03-10T14:53:46.245409+0000 mgr.y (mgr.24425) 85 : cluster [DBG] pgmap v46: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:53:47.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:47 vm00 bash[20726]: cluster 2026-03-10T14:53:46.245409+0000 mgr.y (mgr.24425) 85 : cluster [DBG] pgmap v46: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:53:48.875 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 14:53:48 vm03 bash[48459]: debug there is no tcmu-runner data available 2026-03-10T14:53:49.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:49 vm03 bash[23394]: cluster 2026-03-10T14:53:48.245718+0000 mgr.y (mgr.24425) 86 : cluster [DBG] pgmap v47: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:53:49.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:49 vm03 bash[23394]: cluster 2026-03-10T14:53:48.245718+0000 mgr.y (mgr.24425) 86 : 
cluster [DBG] pgmap v47: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:53:49.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:49 vm03 bash[23394]: audit 2026-03-10T14:53:48.547596+0000 mgr.y (mgr.24425) 87 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:53:49.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:49 vm03 bash[23394]: audit 2026-03-10T14:53:48.547596+0000 mgr.y (mgr.24425) 87 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:53:49.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:49 vm00 bash[28403]: cluster 2026-03-10T14:53:48.245718+0000 mgr.y (mgr.24425) 86 : cluster [DBG] pgmap v47: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:53:49.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:49 vm00 bash[28403]: cluster 2026-03-10T14:53:48.245718+0000 mgr.y (mgr.24425) 86 : cluster [DBG] pgmap v47: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:53:49.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:49 vm00 bash[28403]: audit 2026-03-10T14:53:48.547596+0000 mgr.y (mgr.24425) 87 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:53:49.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:49 vm00 bash[28403]: audit 2026-03-10T14:53:48.547596+0000 mgr.y (mgr.24425) 87 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:53:49.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:49 vm00 bash[20726]: cluster 2026-03-10T14:53:48.245718+0000 mgr.y (mgr.24425) 86 : cluster 
[DBG] pgmap v47: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:53:49.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:49 vm00 bash[20726]: cluster 2026-03-10T14:53:48.245718+0000 mgr.y (mgr.24425) 86 : cluster [DBG] pgmap v47: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:53:49.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:49 vm00 bash[20726]: audit 2026-03-10T14:53:48.547596+0000 mgr.y (mgr.24425) 87 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:53:49.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:49 vm00 bash[20726]: audit 2026-03-10T14:53:48.547596+0000 mgr.y (mgr.24425) 87 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:53:51.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:51 vm03 bash[23394]: cluster 2026-03-10T14:53:50.246068+0000 mgr.y (mgr.24425) 88 : cluster [DBG] pgmap v48: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:53:51.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:51 vm03 bash[23394]: cluster 2026-03-10T14:53:50.246068+0000 mgr.y (mgr.24425) 88 : cluster [DBG] pgmap v48: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:53:51.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:51 vm00 bash[28403]: cluster 2026-03-10T14:53:50.246068+0000 mgr.y (mgr.24425) 88 : cluster [DBG] pgmap v48: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:53:51.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:51 vm00 bash[28403]: cluster 2026-03-10T14:53:50.246068+0000 mgr.y (mgr.24425) 88 : cluster 
[DBG] pgmap v48: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:53:51.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:51 vm00 bash[20726]: cluster 2026-03-10T14:53:50.246068+0000 mgr.y (mgr.24425) 88 : cluster [DBG] pgmap v48: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:53:51.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:51 vm00 bash[20726]: cluster 2026-03-10T14:53:50.246068+0000 mgr.y (mgr.24425) 88 : cluster [DBG] pgmap v48: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:53:53.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:53 vm03 bash[23394]: cluster 2026-03-10T14:53:52.246554+0000 mgr.y (mgr.24425) 89 : cluster [DBG] pgmap v49: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:53:53.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:53 vm03 bash[23394]: cluster 2026-03-10T14:53:52.246554+0000 mgr.y (mgr.24425) 89 : cluster [DBG] pgmap v49: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:53:53.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:53 vm00 bash[28403]: cluster 2026-03-10T14:53:52.246554+0000 mgr.y (mgr.24425) 89 : cluster [DBG] pgmap v49: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:53:53.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:53 vm00 bash[28403]: cluster 2026-03-10T14:53:52.246554+0000 mgr.y (mgr.24425) 89 : cluster [DBG] pgmap v49: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:53:53.965 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:53:53 vm00 bash[21005]: ::ffff:192.168.123.103 - - [10/Mar/2026:14:53:53] "GET /metrics HTTP/1.1" 
503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:53:53.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:53 vm00 bash[20726]: cluster 2026-03-10T14:53:52.246554+0000 mgr.y (mgr.24425) 89 : cluster [DBG] pgmap v49: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:53:53.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:53 vm00 bash[20726]: cluster 2026-03-10T14:53:52.246554+0000 mgr.y (mgr.24425) 89 : cluster [DBG] pgmap v49: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:53:54.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:54 vm03 bash[23394]: audit 2026-03-10T14:53:54.359963+0000 mon.a (mon.0) 865 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:53:54.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:54 vm03 bash[23394]: audit 2026-03-10T14:53:54.359963+0000 mon.a (mon.0) 865 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:53:54.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:54 vm00 bash[28403]: audit 2026-03-10T14:53:54.359963+0000 mon.a (mon.0) 865 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:53:54.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:54 vm00 bash[28403]: audit 2026-03-10T14:53:54.359963+0000 mon.a (mon.0) 865 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:53:54.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:54 vm00 bash[20726]: audit 2026-03-10T14:53:54.359963+0000 mon.a (mon.0) 865 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 
cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:53:54.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:54 vm00 bash[20726]: audit 2026-03-10T14:53:54.359963+0000 mon.a (mon.0) 865 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:53:55.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:55 vm03 bash[23394]: cluster 2026-03-10T14:53:54.246892+0000 mgr.y (mgr.24425) 90 : cluster [DBG] pgmap v50: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:53:55.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:55 vm03 bash[23394]: cluster 2026-03-10T14:53:54.246892+0000 mgr.y (mgr.24425) 90 : cluster [DBG] pgmap v50: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:53:55.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:55 vm00 bash[28403]: cluster 2026-03-10T14:53:54.246892+0000 mgr.y (mgr.24425) 90 : cluster [DBG] pgmap v50: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:53:55.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:55 vm00 bash[28403]: cluster 2026-03-10T14:53:54.246892+0000 mgr.y (mgr.24425) 90 : cluster [DBG] pgmap v50: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:53:55.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:55 vm00 bash[20726]: cluster 2026-03-10T14:53:54.246892+0000 mgr.y (mgr.24425) 90 : cluster [DBG] pgmap v50: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:53:55.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:55 vm00 bash[20726]: cluster 2026-03-10T14:53:54.246892+0000 mgr.y (mgr.24425) 90 : cluster [DBG] pgmap v50: 132 pgs: 132 active+clean; 455 KiB 
data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:53:57.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:57 vm00 bash[28403]: cluster 2026-03-10T14:53:56.247407+0000 mgr.y (mgr.24425) 91 : cluster [DBG] pgmap v51: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:53:57.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:57 vm00 bash[28403]: cluster 2026-03-10T14:53:56.247407+0000 mgr.y (mgr.24425) 91 : cluster [DBG] pgmap v51: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:53:57.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:57 vm00 bash[20726]: cluster 2026-03-10T14:53:56.247407+0000 mgr.y (mgr.24425) 91 : cluster [DBG] pgmap v51: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:53:57.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:57 vm00 bash[20726]: cluster 2026-03-10T14:53:56.247407+0000 mgr.y (mgr.24425) 91 : cluster [DBG] pgmap v51: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:53:58.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:57 vm03 bash[23394]: cluster 2026-03-10T14:53:56.247407+0000 mgr.y (mgr.24425) 91 : cluster [DBG] pgmap v51: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:53:58.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:57 vm03 bash[23394]: cluster 2026-03-10T14:53:56.247407+0000 mgr.y (mgr.24425) 91 : cluster [DBG] pgmap v51: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:53:58.875 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 14:53:58 vm03 bash[48459]: debug there is no tcmu-runner data available 2026-03-10T14:54:00.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 
14:53:59 vm03 bash[23394]: cluster 2026-03-10T14:53:58.247721+0000 mgr.y (mgr.24425) 92 : cluster [DBG] pgmap v52: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:54:00.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:59 vm03 bash[23394]: cluster 2026-03-10T14:53:58.247721+0000 mgr.y (mgr.24425) 92 : cluster [DBG] pgmap v52: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:54:00.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:59 vm03 bash[23394]: audit 2026-03-10T14:53:58.558365+0000 mgr.y (mgr.24425) 93 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:54:00.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:53:59 vm03 bash[23394]: audit 2026-03-10T14:53:58.558365+0000 mgr.y (mgr.24425) 93 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:54:00.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:59 vm00 bash[28403]: cluster 2026-03-10T14:53:58.247721+0000 mgr.y (mgr.24425) 92 : cluster [DBG] pgmap v52: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:54:00.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:59 vm00 bash[28403]: cluster 2026-03-10T14:53:58.247721+0000 mgr.y (mgr.24425) 92 : cluster [DBG] pgmap v52: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:54:00.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:59 vm00 bash[28403]: audit 2026-03-10T14:53:58.558365+0000 mgr.y (mgr.24425) 93 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:54:00.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:53:59 
vm00 bash[28403]: audit 2026-03-10T14:53:58.558365+0000 mgr.y (mgr.24425) 93 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:54:00.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:59 vm00 bash[20726]: cluster 2026-03-10T14:53:58.247721+0000 mgr.y (mgr.24425) 92 : cluster [DBG] pgmap v52: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:54:00.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:59 vm00 bash[20726]: cluster 2026-03-10T14:53:58.247721+0000 mgr.y (mgr.24425) 92 : cluster [DBG] pgmap v52: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:54:00.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:59 vm00 bash[20726]: audit 2026-03-10T14:53:58.558365+0000 mgr.y (mgr.24425) 93 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:54:00.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:53:59 vm00 bash[20726]: audit 2026-03-10T14:53:58.558365+0000 mgr.y (mgr.24425) 93 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:54:01.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:00 vm03 bash[23394]: audit 2026-03-10T14:53:59.756984+0000 mon.a (mon.0) 866 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:54:01.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:00 vm03 bash[23394]: audit 2026-03-10T14:53:59.756984+0000 mon.a (mon.0) 866 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:54:01.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:00 
vm00 bash[28403]: audit 2026-03-10T14:53:59.756984+0000 mon.a (mon.0) 866 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:54:01.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:00 vm00 bash[28403]: audit 2026-03-10T14:53:59.756984+0000 mon.a (mon.0) 866 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:54:01.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:00 vm00 bash[20726]: audit 2026-03-10T14:53:59.756984+0000 mon.a (mon.0) 866 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:54:01.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:00 vm00 bash[20726]: audit 2026-03-10T14:53:59.756984+0000 mon.a (mon.0) 866 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:54:02.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:01 vm03 bash[23394]: cluster 2026-03-10T14:54:00.248145+0000 mgr.y (mgr.24425) 94 : cluster [DBG] pgmap v53: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:02.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:01 vm03 bash[23394]: cluster 2026-03-10T14:54:00.248145+0000 mgr.y (mgr.24425) 94 : cluster [DBG] pgmap v53: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:02.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:01 vm00 bash[28403]: cluster 2026-03-10T14:54:00.248145+0000 mgr.y (mgr.24425) 94 : cluster [DBG] pgmap v53: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:02.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 
14:54:01 vm00 bash[28403]: cluster 2026-03-10T14:54:00.248145+0000 mgr.y (mgr.24425) 94 : cluster [DBG] pgmap v53: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:02.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:01 vm00 bash[20726]: cluster 2026-03-10T14:54:00.248145+0000 mgr.y (mgr.24425) 94 : cluster [DBG] pgmap v53: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:02.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:01 vm00 bash[20726]: cluster 2026-03-10T14:54:00.248145+0000 mgr.y (mgr.24425) 94 : cluster [DBG] pgmap v53: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:03.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:02 vm03 bash[23394]: cluster 2026-03-10T14:54:02.248587+0000 mgr.y (mgr.24425) 95 : cluster [DBG] pgmap v54: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:54:03.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:02 vm03 bash[23394]: cluster 2026-03-10T14:54:02.248587+0000 mgr.y (mgr.24425) 95 : cluster [DBG] pgmap v54: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:54:03.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:02 vm00 bash[20726]: cluster 2026-03-10T14:54:02.248587+0000 mgr.y (mgr.24425) 95 : cluster [DBG] pgmap v54: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:54:03.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:02 vm00 bash[20726]: cluster 2026-03-10T14:54:02.248587+0000 mgr.y (mgr.24425) 95 : cluster [DBG] pgmap v54: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:54:03.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 
14:54:02 vm00 bash[28403]: cluster 2026-03-10T14:54:02.248587+0000 mgr.y (mgr.24425) 95 : cluster [DBG] pgmap v54: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:54:03.216 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:02 vm00 bash[28403]: cluster 2026-03-10T14:54:02.248587+0000 mgr.y (mgr.24425) 95 : cluster [DBG] pgmap v54: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:54:04.215 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:54:03 vm00 bash[21005]: ::ffff:192.168.123.103 - - [10/Mar/2026:14:54:03] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:54:05.459 INFO:tasks.workunit.client.0.vm00.stderr:Updating files: 97% (13542/13941) Updating files: 98% (13663/13941) Updating files: 99% (13802/13941) Updating files: 100% (13941/13941) Updating files: 100% (13941/13941), done. 2026-03-10T14:54:05.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:05 vm00 bash[20726]: cluster 2026-03-10T14:54:04.248858+0000 mgr.y (mgr.24425) 96 : cluster [DBG] pgmap v55: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:54:05.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:05 vm00 bash[20726]: cluster 2026-03-10T14:54:04.248858+0000 mgr.y (mgr.24425) 96 : cluster [DBG] pgmap v55: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:54:05.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:05 vm00 bash[20726]: audit 2026-03-10T14:54:05.247808+0000 mon.a (mon.0) 867 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:54:05.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:05 vm00 bash[20726]: audit 2026-03-10T14:54:05.247808+0000 mon.a (mon.0) 867 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:54:05.715 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:05 vm00 bash[20726]: audit 2026-03-10T14:54:05.307747+0000 mon.a (mon.0) 868 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:54:05.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:05 vm00 bash[20726]: audit 2026-03-10T14:54:05.307747+0000 mon.a (mon.0) 868 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:54:05.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:05 vm00 bash[20726]: audit 2026-03-10T14:54:05.388379+0000 mon.a (mon.0) 869 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:54:05.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:05 vm00 bash[20726]: audit 2026-03-10T14:54:05.388379+0000 mon.a (mon.0) 869 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:54:05.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:05 vm00 bash[20726]: audit 2026-03-10T14:54:05.395579+0000 mon.a (mon.0) 870 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:54:05.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:05 vm00 bash[20726]: audit 2026-03-10T14:54:05.395579+0000 mon.a (mon.0) 870 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:54:05.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:05 vm00 bash[28403]: cluster 2026-03-10T14:54:04.248858+0000 mgr.y (mgr.24425) 96 : cluster [DBG] pgmap v55: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:54:05.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:05 vm00 bash[28403]: cluster 2026-03-10T14:54:04.248858+0000 mgr.y (mgr.24425) 96 : cluster [DBG] pgmap v55: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:54:05.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:05 vm00 
bash[28403]: audit 2026-03-10T14:54:05.247808+0000 mon.a (mon.0) 867 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:54:05.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:05 vm00 bash[28403]: audit 2026-03-10T14:54:05.247808+0000 mon.a (mon.0) 867 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:54:05.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:05 vm00 bash[28403]: audit 2026-03-10T14:54:05.307747+0000 mon.a (mon.0) 868 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:54:05.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:05 vm00 bash[28403]: audit 2026-03-10T14:54:05.307747+0000 mon.a (mon.0) 868 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:54:05.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:05 vm00 bash[28403]: audit 2026-03-10T14:54:05.388379+0000 mon.a (mon.0) 869 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:54:05.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:05 vm00 bash[28403]: audit 2026-03-10T14:54:05.388379+0000 mon.a (mon.0) 869 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:54:05.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:05 vm00 bash[28403]: audit 2026-03-10T14:54:05.395579+0000 mon.a (mon.0) 870 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:54:05.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:05 vm00 bash[28403]: audit 2026-03-10T14:54:05.395579+0000 mon.a (mon.0) 870 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:54:05.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:05 vm03 bash[23394]: cluster 2026-03-10T14:54:04.248858+0000 mgr.y (mgr.24425) 96 : cluster [DBG] pgmap v55: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 
853 B/s rd, 0 op/s 2026-03-10T14:54:05.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:05 vm03 bash[23394]: cluster 2026-03-10T14:54:04.248858+0000 mgr.y (mgr.24425) 96 : cluster [DBG] pgmap v55: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:54:05.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:05 vm03 bash[23394]: audit 2026-03-10T14:54:05.247808+0000 mon.a (mon.0) 867 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:54:05.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:05 vm03 bash[23394]: audit 2026-03-10T14:54:05.247808+0000 mon.a (mon.0) 867 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:54:05.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:05 vm03 bash[23394]: audit 2026-03-10T14:54:05.307747+0000 mon.a (mon.0) 868 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:54:05.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:05 vm03 bash[23394]: audit 2026-03-10T14:54:05.307747+0000 mon.a (mon.0) 868 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:54:05.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:05 vm03 bash[23394]: audit 2026-03-10T14:54:05.388379+0000 mon.a (mon.0) 869 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:54:05.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:05 vm03 bash[23394]: audit 2026-03-10T14:54:05.388379+0000 mon.a (mon.0) 869 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:54:05.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:05 vm03 bash[23394]: audit 2026-03-10T14:54:05.395579+0000 mon.a (mon.0) 870 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:54:05.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:05 vm03 bash[23394]: audit 
2026-03-10T14:54:05.395579+0000 mon.a (mon.0) 870 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:54:06.367 INFO:tasks.workunit.client.0.vm00.stderr:Note: switching to '75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b'. 2026-03-10T14:54:06.367 INFO:tasks.workunit.client.0.vm00.stderr: 2026-03-10T14:54:06.367 INFO:tasks.workunit.client.0.vm00.stderr:You are in 'detached HEAD' state. You can look around, make experimental 2026-03-10T14:54:06.367 INFO:tasks.workunit.client.0.vm00.stderr:changes and commit them, and you can discard any commits you make in this 2026-03-10T14:54:06.367 INFO:tasks.workunit.client.0.vm00.stderr:state without impacting any branches by switching back to a branch. 2026-03-10T14:54:06.367 INFO:tasks.workunit.client.0.vm00.stderr: 2026-03-10T14:54:06.367 INFO:tasks.workunit.client.0.vm00.stderr:If you want to create a new branch to retain commits you create, you may 2026-03-10T14:54:06.367 INFO:tasks.workunit.client.0.vm00.stderr:do so (now or later) by using -c with the switch command. 
Example: 2026-03-10T14:54:06.367 INFO:tasks.workunit.client.0.vm00.stderr: 2026-03-10T14:54:06.368 INFO:tasks.workunit.client.0.vm00.stderr: git switch -c <new-branch-name> 2026-03-10T14:54:06.368 INFO:tasks.workunit.client.0.vm00.stderr: 2026-03-10T14:54:06.368 INFO:tasks.workunit.client.0.vm00.stderr:Or undo this operation with: 2026-03-10T14:54:06.368 INFO:tasks.workunit.client.0.vm00.stderr: 2026-03-10T14:54:06.368 INFO:tasks.workunit.client.0.vm00.stderr: git switch - 2026-03-10T14:54:06.368 INFO:tasks.workunit.client.0.vm00.stderr: 2026-03-10T14:54:06.368 INFO:tasks.workunit.client.0.vm00.stderr:Turn off this advice by setting config variable advice.detachedHead to false 2026-03-10T14:54:06.368 INFO:tasks.workunit.client.0.vm00.stderr: 2026-03-10T14:54:06.368 INFO:tasks.workunit.client.0.vm00.stderr:HEAD is now at 75a68fd8ca3 qa/suites/orch/cephadm/osds: drop nvme_loop task 2026-03-10T14:54:06.375 DEBUG:teuthology.orchestra.run.vm00:> cd -- /home/ubuntu/cephtest/clone.client.0/qa/workunits && if test -e Makefile ; then make ; fi && find -executable -type f -printf '%P\0' >/home/ubuntu/cephtest/workunits.list.client.0 2026-03-10T14:54:06.420 INFO:tasks.workunit.client.0.vm00.stdout:for d in direct_io fs ; do ( cd $d ; make all ) ; done 2026-03-10T14:54:06.422 INFO:tasks.workunit.client.0.vm00.stdout:make[1]: Entering directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/direct_io' 2026-03-10T14:54:06.422 INFO:tasks.workunit.client.0.vm00.stdout:cc -Wall -Wextra -D_GNU_SOURCE direct_io_test.c -o direct_io_test 2026-03-10T14:54:06.482 INFO:tasks.workunit.client.0.vm00.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_sync_io.c -o test_sync_io 2026-03-10T14:54:06.516 INFO:tasks.workunit.client.0.vm00.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_short_dio_read.c -o test_short_dio_read 2026-03-10T14:54:06.547 INFO:tasks.workunit.client.0.vm00.stdout:make[1]: Leaving directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/direct_io' 2026-03-10T14:54:06.548
INFO:tasks.workunit.client.0.vm00.stdout:make[1]: Entering directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/fs' 2026-03-10T14:54:06.548 INFO:tasks.workunit.client.0.vm00.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_o_trunc.c -o test_o_trunc 2026-03-10T14:54:06.572 INFO:tasks.workunit.client.0.vm00.stdout:make[1]: Leaving directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/fs' 2026-03-10T14:54:06.575 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-10T14:54:06.576 DEBUG:teuthology.orchestra.run.vm00:> dd if=/home/ubuntu/cephtest/workunits.list.client.0 of=/dev/stdout 2026-03-10T14:54:06.623 INFO:tasks.workunit:Running workunits matching rados/test_python.sh on client.0... 2026-03-10T14:54:06.624 INFO:tasks.workunit:Running workunit rados/test_python.sh... 2026-03-10T14:54:06.624 DEBUG:teuthology.orchestra.run.vm00:workunit test rados/test_python.sh> mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 1h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_python.sh 2026-03-10T14:54:06.669 INFO:tasks.workunit.client.0.vm00.stderr:+ ceph osd pool create rbd 2026-03-10T14:54:06.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:06 vm00 bash[20726]: audit 2026-03-10T14:54:05.865983+0000 mon.a (mon.0) 871 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:54:06.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:06 vm00 bash[20726]: audit 2026-03-10T14:54:05.865983+0000 mon.a (mon.0) 871 : audit [DBG] 
from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:54:06.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:06 vm00 bash[20726]: audit 2026-03-10T14:54:05.866651+0000 mon.a (mon.0) 872 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:54:06.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:06 vm00 bash[20726]: audit 2026-03-10T14:54:05.866651+0000 mon.a (mon.0) 872 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:54:06.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:06 vm00 bash[20726]: audit 2026-03-10T14:54:06.088423+0000 mon.a (mon.0) 873 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:54:06.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:06 vm00 bash[20726]: audit 2026-03-10T14:54:06.088423+0000 mon.a (mon.0) 873 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:54:06.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:06 vm00 bash[28403]: audit 2026-03-10T14:54:05.865983+0000 mon.a (mon.0) 871 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:54:06.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:06 vm00 bash[28403]: audit 2026-03-10T14:54:05.865983+0000 mon.a (mon.0) 871 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:54:06.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:06 vm00 bash[28403]: audit 2026-03-10T14:54:05.866651+0000 mon.a (mon.0) 872 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": 
"client.admin"}]: dispatch 2026-03-10T14:54:06.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:06 vm00 bash[28403]: audit 2026-03-10T14:54:05.866651+0000 mon.a (mon.0) 872 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:54:06.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:06 vm00 bash[28403]: audit 2026-03-10T14:54:06.088423+0000 mon.a (mon.0) 873 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:54:06.716 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:06 vm00 bash[28403]: audit 2026-03-10T14:54:06.088423+0000 mon.a (mon.0) 873 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:54:06.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:06 vm03 bash[23394]: audit 2026-03-10T14:54:05.865983+0000 mon.a (mon.0) 871 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:54:06.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:06 vm03 bash[23394]: audit 2026-03-10T14:54:05.865983+0000 mon.a (mon.0) 871 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:54:06.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:06 vm03 bash[23394]: audit 2026-03-10T14:54:05.866651+0000 mon.a (mon.0) 872 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:54:06.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:06 vm03 bash[23394]: audit 2026-03-10T14:54:05.866651+0000 mon.a (mon.0) 872 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:54:06.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 
14:54:06 vm03 bash[23394]: audit 2026-03-10T14:54:06.088423+0000 mon.a (mon.0) 873 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:54:06.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:06 vm03 bash[23394]: audit 2026-03-10T14:54:06.088423+0000 mon.a (mon.0) 873 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:54:07.511 INFO:tasks.workunit.client.0.vm00.stderr:pool 'rbd' already exists 2026-03-10T14:54:07.533 INFO:tasks.workunit.client.0.vm00.stderr:+ dirname /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_python.sh 2026-03-10T14:54:07.534 INFO:tasks.workunit.client.0.vm00.stderr:+ python3 -m pytest -v /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/../../../src/test/pybind/test_rados.py 2026-03-10T14:54:07.616 INFO:tasks.workunit.client.0.vm00.stdout:============================= test session starts ============================== 2026-03-10T14:54:07.617 INFO:tasks.workunit.client.0.vm00.stdout:platform linux -- Python 3.10.12, pytest-6.2.5, py-1.10.0, pluggy-0.13.0 -- /usr/bin/python3 2026-03-10T14:54:07.617 INFO:tasks.workunit.client.0.vm00.stdout:cachedir: .pytest_cache 2026-03-10T14:54:07.617 INFO:tasks.workunit.client.0.vm00.stdout:rootdir: /home/ubuntu/cephtest/clone.client.0/src/test/pybind, configfile: pytest.ini 2026-03-10T14:54:07.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:07 vm00 bash[28403]: cluster 2026-03-10T14:54:06.249432+0000 mgr.y (mgr.24425) 97 : cluster [DBG] pgmap v56: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:07.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:07 vm00 bash[28403]: cluster 2026-03-10T14:54:06.249432+0000 mgr.y (mgr.24425) 97 : cluster [DBG] pgmap v56: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:07.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:07 
vm00 bash[28403]: audit 2026-03-10T14:54:06.856573+0000 mon.a (mon.0) 874 : audit [INF] from='client.? 192.168.123.100:0/1161305512' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "rbd"}]: dispatch 2026-03-10T14:54:07.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:07 vm00 bash[28403]: audit 2026-03-10T14:54:06.856573+0000 mon.a (mon.0) 874 : audit [INF] from='client.? 192.168.123.100:0/1161305512' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "rbd"}]: dispatch 2026-03-10T14:54:07.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:07 vm00 bash[20726]: cluster 2026-03-10T14:54:06.249432+0000 mgr.y (mgr.24425) 97 : cluster [DBG] pgmap v56: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:07.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:07 vm00 bash[20726]: cluster 2026-03-10T14:54:06.249432+0000 mgr.y (mgr.24425) 97 : cluster [DBG] pgmap v56: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:07.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:07 vm00 bash[20726]: audit 2026-03-10T14:54:06.856573+0000 mon.a (mon.0) 874 : audit [INF] from='client.? 192.168.123.100:0/1161305512' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "rbd"}]: dispatch 2026-03-10T14:54:07.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:07 vm00 bash[20726]: audit 2026-03-10T14:54:06.856573+0000 mon.a (mon.0) 874 : audit [INF] from='client.? 192.168.123.100:0/1161305512' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "rbd"}]: dispatch 2026-03-10T14:54:07.774 INFO:tasks.workunit.client.0.vm00.stdout:collecting ... 
collected 91 items 2026-03-10T14:54:07.774 INFO:tasks.workunit.client.0.vm00.stdout: 2026-03-10T14:54:07.780 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::test_rados_init_error PASSED [ 1%] 2026-03-10T14:54:07.814 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::test_rados_init PASSED [ 2%] 2026-03-10T14:54:07.826 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::test_ioctx_context_manager PASSED [ 3%] 2026-03-10T14:54:07.831 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::test_parse_argv PASSED [ 4%] 2026-03-10T14:54:07.834 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::test_parse_argv_empty_str PASSED [ 5%] 2026-03-10T14:54:07.838 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRadosStateError::test_configuring PASSED [ 6%] 2026-03-10T14:54:07.848 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRadosStateError::test_connected PASSED [ 7%] 2026-03-10T14:54:07.856 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRadosStateError::test_shutdown PASSED [ 8%] 2026-03-10T14:54:07.871 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRados::test_ping_monitor PASSED [ 9%] 2026-03-10T14:54:07.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:07 vm03 bash[23394]: cluster 2026-03-10T14:54:06.249432+0000 mgr.y (mgr.24425) 97 : cluster [DBG] pgmap v56: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:07.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:07 vm03 bash[23394]: cluster 2026-03-10T14:54:06.249432+0000 mgr.y (mgr.24425) 97 : cluster [DBG] pgmap v56: 
132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:07.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:07 vm03 bash[23394]: audit 2026-03-10T14:54:06.856573+0000 mon.a (mon.0) 874 : audit [INF] from='client.? 192.168.123.100:0/1161305512' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "rbd"}]: dispatch 2026-03-10T14:54:07.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:07 vm03 bash[23394]: audit 2026-03-10T14:54:06.856573+0000 mon.a (mon.0) 874 : audit [INF] from='client.? 192.168.123.100:0/1161305512' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "rbd"}]: dispatch 2026-03-10T14:54:07.882 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRados::test_annotations PASSED [ 10%] 2026-03-10T14:54:08.875 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 14:54:08 vm03 bash[48459]: debug there is no tcmu-runner data available 2026-03-10T14:54:08.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:08 vm03 bash[23394]: audit 2026-03-10T14:54:07.452826+0000 mon.a (mon.0) 875 : audit [INF] from='client.? 192.168.123.100:0/1161305512' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "rbd"}]': finished 2026-03-10T14:54:08.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:08 vm03 bash[23394]: audit 2026-03-10T14:54:07.452826+0000 mon.a (mon.0) 875 : audit [INF] from='client.? 
192.168.123.100:0/1161305512' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "rbd"}]': finished 2026-03-10T14:54:08.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:08 vm03 bash[23394]: cluster 2026-03-10T14:54:07.460048+0000 mon.a (mon.0) 876 : cluster [DBG] osdmap e72: 8 total, 8 up, 8 in 2026-03-10T14:54:08.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:08 vm03 bash[23394]: cluster 2026-03-10T14:54:07.460048+0000 mon.a (mon.0) 876 : cluster [DBG] osdmap e72: 8 total, 8 up, 8 in 2026-03-10T14:54:08.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:08 vm03 bash[23394]: audit 2026-03-10T14:54:07.513806+0000 mon.a (mon.0) 877 : audit [INF] from='client.? 192.168.123.100:0/1161305512' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "rbd"}]: dispatch 2026-03-10T14:54:08.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:08 vm03 bash[23394]: audit 2026-03-10T14:54:07.513806+0000 mon.a (mon.0) 877 : audit [INF] from='client.? 192.168.123.100:0/1161305512' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "rbd"}]: dispatch 2026-03-10T14:54:08.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:08 vm03 bash[23394]: audit 2026-03-10T14:54:07.869487+0000 mon.c (mon.2) 25 : audit [DBG] from='client.? 192.168.123.100:0/1674788147' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T14:54:08.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:08 vm03 bash[23394]: audit 2026-03-10T14:54:07.869487+0000 mon.c (mon.2) 25 : audit [DBG] from='client.? 192.168.123.100:0/1674788147' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T14:54:08.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:08 vm00 bash[28403]: audit 2026-03-10T14:54:07.452826+0000 mon.a (mon.0) 875 : audit [INF] from='client.? 
192.168.123.100:0/1161305512' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "rbd"}]': finished 2026-03-10T14:54:08.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:08 vm00 bash[28403]: audit 2026-03-10T14:54:07.452826+0000 mon.a (mon.0) 875 : audit [INF] from='client.? 192.168.123.100:0/1161305512' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "rbd"}]': finished 2026-03-10T14:54:08.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:08 vm00 bash[28403]: cluster 2026-03-10T14:54:07.460048+0000 mon.a (mon.0) 876 : cluster [DBG] osdmap e72: 8 total, 8 up, 8 in 2026-03-10T14:54:08.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:08 vm00 bash[28403]: cluster 2026-03-10T14:54:07.460048+0000 mon.a (mon.0) 876 : cluster [DBG] osdmap e72: 8 total, 8 up, 8 in 2026-03-10T14:54:08.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:08 vm00 bash[28403]: audit 2026-03-10T14:54:07.513806+0000 mon.a (mon.0) 877 : audit [INF] from='client.? 192.168.123.100:0/1161305512' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "rbd"}]: dispatch 2026-03-10T14:54:08.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:08 vm00 bash[28403]: audit 2026-03-10T14:54:07.513806+0000 mon.a (mon.0) 877 : audit [INF] from='client.? 192.168.123.100:0/1161305512' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "rbd"}]: dispatch 2026-03-10T14:54:08.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:08 vm00 bash[28403]: audit 2026-03-10T14:54:07.869487+0000 mon.c (mon.2) 25 : audit [DBG] from='client.? 192.168.123.100:0/1674788147' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T14:54:08.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:08 vm00 bash[28403]: audit 2026-03-10T14:54:07.869487+0000 mon.c (mon.2) 25 : audit [DBG] from='client.? 
192.168.123.100:0/1674788147' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T14:54:08.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:08 vm00 bash[20726]: audit 2026-03-10T14:54:07.452826+0000 mon.a (mon.0) 875 : audit [INF] from='client.? 192.168.123.100:0/1161305512' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "rbd"}]': finished 2026-03-10T14:54:08.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:08 vm00 bash[20726]: audit 2026-03-10T14:54:07.452826+0000 mon.a (mon.0) 875 : audit [INF] from='client.? 192.168.123.100:0/1161305512' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "rbd"}]': finished 2026-03-10T14:54:08.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:08 vm00 bash[20726]: cluster 2026-03-10T14:54:07.460048+0000 mon.a (mon.0) 876 : cluster [DBG] osdmap e72: 8 total, 8 up, 8 in 2026-03-10T14:54:08.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:08 vm00 bash[20726]: cluster 2026-03-10T14:54:07.460048+0000 mon.a (mon.0) 876 : cluster [DBG] osdmap e72: 8 total, 8 up, 8 in 2026-03-10T14:54:08.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:08 vm00 bash[20726]: audit 2026-03-10T14:54:07.513806+0000 mon.a (mon.0) 877 : audit [INF] from='client.? 192.168.123.100:0/1161305512' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "rbd"}]: dispatch 2026-03-10T14:54:08.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:08 vm00 bash[20726]: audit 2026-03-10T14:54:07.513806+0000 mon.a (mon.0) 877 : audit [INF] from='client.? 192.168.123.100:0/1161305512' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "rbd"}]: dispatch 2026-03-10T14:54:08.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:08 vm00 bash[20726]: audit 2026-03-10T14:54:07.869487+0000 mon.c (mon.2) 25 : audit [DBG] from='client.? 
192.168.123.100:0/1674788147' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T14:54:08.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:08 vm00 bash[20726]: audit 2026-03-10T14:54:07.869487+0000 mon.c (mon.2) 25 : audit [DBG] from='client.? 192.168.123.100:0/1674788147' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T14:54:09.471 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRados::test_create PASSED [ 12%] 2026-03-10T14:54:09.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:09 vm03 bash[23394]: cluster 2026-03-10T14:54:08.249742+0000 mgr.y (mgr.24425) 98 : cluster [DBG] pgmap v58: 164 pgs: 32 unknown, 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T14:54:09.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:09 vm03 bash[23394]: cluster 2026-03-10T14:54:08.249742+0000 mgr.y (mgr.24425) 98 : cluster [DBG] pgmap v58: 164 pgs: 32 unknown, 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T14:54:09.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:09 vm03 bash[23394]: cluster 2026-03-10T14:54:08.454376+0000 mon.a (mon.0) 878 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:54:09.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:09 vm03 bash[23394]: cluster 2026-03-10T14:54:08.454376+0000 mon.a (mon.0) 878 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:54:09.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:09 vm03 bash[23394]: cluster 2026-03-10T14:54:08.470948+0000 mon.a (mon.0) 879 : cluster [DBG] osdmap e73: 8 total, 8 up, 8 in 2026-03-10T14:54:09.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:09 vm03 bash[23394]: cluster 
2026-03-10T14:54:08.470948+0000 mon.a (mon.0) 879 : cluster [DBG] osdmap e73: 8 total, 8 up, 8 in 2026-03-10T14:54:09.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:09 vm03 bash[23394]: audit 2026-03-10T14:54:08.569231+0000 mgr.y (mgr.24425) 99 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:54:09.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:09 vm03 bash[23394]: audit 2026-03-10T14:54:08.569231+0000 mgr.y (mgr.24425) 99 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:54:09.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:09 vm03 bash[23394]: audit 2026-03-10T14:54:09.371880+0000 mon.a (mon.0) 880 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:54:09.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:09 vm03 bash[23394]: audit 2026-03-10T14:54:09.371880+0000 mon.a (mon.0) 880 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:54:09.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:09 vm03 bash[23394]: audit 2026-03-10T14:54:09.372805+0000 mon.a (mon.0) 881 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:54:09.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:09 vm03 bash[23394]: audit 2026-03-10T14:54:09.372805+0000 mon.a (mon.0) 881 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:54:09.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:09 vm03 bash[23394]: cluster 2026-03-10T14:54:09.472366+0000 mon.a (mon.0) 882 : cluster [DBG] osdmap e74: 8 total, 8 up, 8 in 2026-03-10T14:54:09.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:09 vm03 
bash[23394]: cluster 2026-03-10T14:54:09.472366+0000 mon.a (mon.0) 882 : cluster [DBG] osdmap e74: 8 total, 8 up, 8 in 2026-03-10T14:54:09.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:09 vm00 bash[28403]: cluster 2026-03-10T14:54:08.249742+0000 mgr.y (mgr.24425) 98 : cluster [DBG] pgmap v58: 164 pgs: 32 unknown, 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T14:54:09.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:09 vm00 bash[28403]: cluster 2026-03-10T14:54:08.249742+0000 mgr.y (mgr.24425) 98 : cluster [DBG] pgmap v58: 164 pgs: 32 unknown, 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T14:54:09.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:09 vm00 bash[28403]: cluster 2026-03-10T14:54:08.454376+0000 mon.a (mon.0) 878 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:54:09.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:09 vm00 bash[28403]: cluster 2026-03-10T14:54:08.454376+0000 mon.a (mon.0) 878 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:54:09.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:09 vm00 bash[28403]: cluster 2026-03-10T14:54:08.470948+0000 mon.a (mon.0) 879 : cluster [DBG] osdmap e73: 8 total, 8 up, 8 in 2026-03-10T14:54:09.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:09 vm00 bash[28403]: cluster 2026-03-10T14:54:08.470948+0000 mon.a (mon.0) 879 : cluster [DBG] osdmap e73: 8 total, 8 up, 8 in 2026-03-10T14:54:09.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:09 vm00 bash[28403]: audit 2026-03-10T14:54:08.569231+0000 mgr.y (mgr.24425) 99 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:54:09.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 
14:54:09 vm00 bash[28403]: audit 2026-03-10T14:54:08.569231+0000 mgr.y (mgr.24425) 99 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:54:09.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:09 vm00 bash[28403]: audit 2026-03-10T14:54:09.371880+0000 mon.a (mon.0) 880 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:54:09.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:09 vm00 bash[28403]: audit 2026-03-10T14:54:09.371880+0000 mon.a (mon.0) 880 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:54:09.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:09 vm00 bash[28403]: audit 2026-03-10T14:54:09.372805+0000 mon.a (mon.0) 881 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:54:09.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:09 vm00 bash[28403]: audit 2026-03-10T14:54:09.372805+0000 mon.a (mon.0) 881 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:54:09.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:09 vm00 bash[28403]: cluster 2026-03-10T14:54:09.472366+0000 mon.a (mon.0) 882 : cluster [DBG] osdmap e74: 8 total, 8 up, 8 in 2026-03-10T14:54:09.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:09 vm00 bash[28403]: cluster 2026-03-10T14:54:09.472366+0000 mon.a (mon.0) 882 : cluster [DBG] osdmap e74: 8 total, 8 up, 8 in 2026-03-10T14:54:09.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:09 vm00 bash[20726]: cluster 2026-03-10T14:54:08.249742+0000 mgr.y (mgr.24425) 98 : cluster [DBG] pgmap v58: 164 pgs: 32 unknown, 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T14:54:09.966 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:09 vm00 bash[20726]: cluster 2026-03-10T14:54:08.249742+0000 mgr.y (mgr.24425) 98 : cluster [DBG] pgmap v58: 164 pgs: 32 unknown, 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T14:54:09.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:09 vm00 bash[20726]: cluster 2026-03-10T14:54:08.454376+0000 mon.a (mon.0) 878 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:54:09.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:09 vm00 bash[20726]: cluster 2026-03-10T14:54:08.454376+0000 mon.a (mon.0) 878 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:54:09.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:09 vm00 bash[20726]: cluster 2026-03-10T14:54:08.470948+0000 mon.a (mon.0) 879 : cluster [DBG] osdmap e73: 8 total, 8 up, 8 in 2026-03-10T14:54:09.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:09 vm00 bash[20726]: cluster 2026-03-10T14:54:08.470948+0000 mon.a (mon.0) 879 : cluster [DBG] osdmap e73: 8 total, 8 up, 8 in 2026-03-10T14:54:09.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:09 vm00 bash[20726]: audit 2026-03-10T14:54:08.569231+0000 mgr.y (mgr.24425) 99 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:54:09.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:09 vm00 bash[20726]: audit 2026-03-10T14:54:08.569231+0000 mgr.y (mgr.24425) 99 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:54:09.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:09 vm00 bash[20726]: audit 2026-03-10T14:54:09.371880+0000 mon.a (mon.0) 880 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 
2026-03-10T14:54:09.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:09 vm00 bash[20726]: audit 2026-03-10T14:54:09.371880+0000 mon.a (mon.0) 880 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:54:09.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:09 vm00 bash[20726]: audit 2026-03-10T14:54:09.372805+0000 mon.a (mon.0) 881 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:54:09.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:09 vm00 bash[20726]: cluster 2026-03-10T14:54:09.472366+0000 mon.a (mon.0) 882 : cluster [DBG] osdmap e74: 8 total, 8 up, 8 in
2026-03-10T14:54:11.497 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRados::test_create_utf8 PASSED [ 13%]
2026-03-10T14:54:11.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:11 vm03 bash[23394]: cluster 2026-03-10T14:54:10.250121+0000 mgr.y (mgr.24425) 100 : cluster [DBG] pgmap v61: 164 pgs: 15 unknown, 149 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-10T14:54:11.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:11 vm03 bash[23394]: cluster 2026-03-10T14:54:10.481440+0000 mon.a (mon.0) 883 : cluster [DBG] osdmap e75: 8 total, 8 up, 8 in
2026-03-10T14:54:11.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:11 vm00 bash[28403]: cluster 2026-03-10T14:54:10.250121+0000 mgr.y (mgr.24425) 100 : cluster [DBG] pgmap v61: 164 pgs: 15 unknown, 149 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-10T14:54:11.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:11 vm00 bash[28403]: cluster 2026-03-10T14:54:10.481440+0000 mon.a (mon.0) 883 : cluster [DBG] osdmap e75: 8 total, 8 up, 8 in
2026-03-10T14:54:11.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:11 vm00 bash[20726]: cluster 2026-03-10T14:54:10.250121+0000 mgr.y (mgr.24425) 100 : cluster [DBG] pgmap v61: 164 pgs: 15 unknown, 149 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-10T14:54:11.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:11 vm00 bash[20726]: cluster 2026-03-10T14:54:10.481440+0000 mon.a (mon.0) 883 : cluster [DBG] osdmap e75: 8 total, 8 up, 8 in
2026-03-10T14:54:12.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:12 vm03 bash[23394]: cluster 2026-03-10T14:54:11.492051+0000 mon.a (mon.0) 884 : cluster [DBG] osdmap e76: 8 total, 8 up, 8 in
2026-03-10T14:54:12.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:12 vm00 bash[28403]: cluster 2026-03-10T14:54:11.492051+0000 mon.a (mon.0) 884 : cluster [DBG] osdmap e76: 8 total, 8 up, 8 in
2026-03-10T14:54:12.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:12 vm00 bash[20726]: cluster 2026-03-10T14:54:11.492051+0000 mon.a (mon.0) 884 : cluster [DBG] osdmap e76: 8 total, 8 up, 8 in
2026-03-10T14:54:13.524 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRados::test_pool_lookup_utf8 PASSED [ 14%]
2026-03-10T14:54:13.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:13 vm03 bash[23394]: cluster 2026-03-10T14:54:12.250535+0000 mgr.y (mgr.24425) 101 : cluster [DBG] pgmap v64: 164 pgs: 164 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:54:13.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:13 vm03 bash[23394]: cluster 2026-03-10T14:54:12.539022+0000 mon.a (mon.0) 885 : cluster [DBG] osdmap e77: 8 total, 8 up, 8 in
2026-03-10T14:54:13.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:13 vm00 bash[28403]: cluster 2026-03-10T14:54:12.250535+0000 mgr.y (mgr.24425) 101 : cluster [DBG] pgmap v64: 164 pgs: 164 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:54:13.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:13 vm00 bash[28403]: cluster 2026-03-10T14:54:12.539022+0000 mon.a (mon.0) 885 : cluster [DBG] osdmap e77: 8 total, 8 up, 8 in
2026-03-10T14:54:13.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:13 vm00 bash[20726]: cluster 2026-03-10T14:54:12.250535+0000 mgr.y (mgr.24425) 101 : cluster [DBG] pgmap v64: 164 pgs: 164 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:54:13.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:13 vm00 bash[20726]: cluster 2026-03-10T14:54:12.539022+0000 mon.a (mon.0) 885 : cluster [DBG] osdmap e77: 8 total, 8 up, 8 in
2026-03-10T14:54:13.965 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:54:13 vm00 bash[21005]: ::ffff:192.168.123.103 - - [10/Mar/2026:14:54:13] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T14:54:14.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:14 vm03 bash[23394]: cluster 2026-03-10T14:54:13.525343+0000 mon.a (mon.0) 886 : cluster [DBG] osdmap e78: 8 total, 8 up, 8 in
2026-03-10T14:54:14.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:14 vm00 bash[28403]: cluster 2026-03-10T14:54:13.525343+0000 mon.a (mon.0) 886 : cluster [DBG] osdmap e78: 8 total, 8 up, 8 in
2026-03-10T14:54:14.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:14 vm00 bash[20726]: cluster 2026-03-10T14:54:13.525343+0000 mon.a (mon.0) 886 : cluster [DBG] osdmap e78: 8 total, 8 up, 8 in
2026-03-10T14:54:15.574 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRados::test_eexist PASSED [ 15%]
2026-03-10T14:54:15.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:15 vm03 bash[23394]: cluster 2026-03-10T14:54:14.250912+0000 mgr.y (mgr.24425) 102 : cluster [DBG] pgmap v67: 164 pgs: 164 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail
2026-03-10T14:54:15.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:15 vm03 bash[23394]: cluster 2026-03-10T14:54:14.570423+0000 mon.a (mon.0) 887 : cluster [DBG] osdmap e79: 8 total, 8 up, 8 in
2026-03-10T14:54:15.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:15 vm00 bash[28403]: cluster 2026-03-10T14:54:14.250912+0000 mgr.y (mgr.24425) 102 : cluster [DBG] pgmap v67: 164 pgs: 164 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail
2026-03-10T14:54:15.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:15 vm00 bash[28403]: cluster 2026-03-10T14:54:14.570423+0000 mon.a (mon.0) 887 : cluster [DBG] osdmap e79: 8 total, 8 up, 8 in
2026-03-10T14:54:15.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:15 vm00 bash[20726]: cluster 2026-03-10T14:54:14.250912+0000 mgr.y (mgr.24425) 102 : cluster [DBG] pgmap v67: 164 pgs: 164 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail
2026-03-10T14:54:15.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:15 vm00 bash[20726]: cluster 2026-03-10T14:54:14.570423+0000 mon.a (mon.0) 887 : cluster [DBG] osdmap e79: 8 total, 8 up, 8 in
2026-03-10T14:54:16.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:16 vm03 bash[23394]: cluster 2026-03-10T14:54:15.570091+0000 mon.a (mon.0) 888 : cluster [DBG] osdmap e80: 8 total, 8 up, 8 in
2026-03-10T14:54:16.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:16 vm00 bash[28403]: cluster 2026-03-10T14:54:15.570091+0000 mon.a (mon.0) 888 : cluster [DBG] osdmap e80: 8 total, 8 up, 8 in
2026-03-10T14:54:16.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:16 vm00 bash[20726]: cluster 2026-03-10T14:54:15.570091+0000 mon.a (mon.0) 888 : cluster [DBG] osdmap e80: 8 total, 8 up, 8 in
2026-03-10T14:54:17.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:17 vm03 bash[23394]: cluster 2026-03-10T14:54:16.251221+0000 mgr.y (mgr.24425) 103 : cluster [DBG] pgmap v70: 164 pgs: 164 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:54:17.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:17 vm03 bash[23394]: cluster 2026-03-10T14:54:16.580997+0000 mon.a (mon.0) 889 : cluster [DBG] osdmap e81: 8 total, 8 up, 8 in
2026-03-10T14:54:17.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:17 vm00 bash[28403]: cluster 2026-03-10T14:54:16.251221+0000 mgr.y (mgr.24425) 103 : cluster [DBG] pgmap v70: 164 pgs: 164 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:54:17.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:17 vm00 bash[28403]: cluster 2026-03-10T14:54:16.580997+0000 mon.a (mon.0) 889 : cluster [DBG] osdmap e81: 8 total, 8 up, 8 in
2026-03-10T14:54:17.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:17 vm00 bash[20726]: cluster 2026-03-10T14:54:16.251221+0000 mgr.y (mgr.24425) 103 : cluster [DBG] pgmap v70: 164 pgs: 164 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:54:17.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:17 vm00 bash[20726]: cluster 2026-03-10T14:54:16.580997+0000 mon.a (mon.0) 889 : cluster [DBG] osdmap e81: 8 total, 8 up, 8 in
2026-03-10T14:54:18.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:18 vm03 bash[23394]: cluster 2026-03-10T14:54:17.621720+0000 mon.a (mon.0) 890 : cluster [DBG] osdmap e82: 8 total, 8 up, 8 in
2026-03-10T14:54:18.876 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 14:54:18 vm03 bash[48459]: debug there is no tcmu-runner data available
2026-03-10T14:54:18.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:18 vm00 bash[28403]: cluster 2026-03-10T14:54:17.621720+0000 mon.a (mon.0) 890 : cluster [DBG] osdmap e82: 8 total, 8 up, 8 in
2026-03-10T14:54:18.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:18 vm00 bash[20726]: cluster 2026-03-10T14:54:17.621720+0000 mon.a (mon.0) 890 : cluster [DBG] osdmap e82: 8 total, 8 up, 8 in
2026-03-10T14:54:19.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:19 vm03 bash[23394]: cluster 2026-03-10T14:54:18.251644+0000 mgr.y (mgr.24425) 104 : cluster [DBG] pgmap v73: 228 pgs: 64 unknown, 164 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:54:19.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:19 vm03 bash[23394]: audit 2026-03-10T14:54:18.580093+0000 mgr.y (mgr.24425) 105 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:54:19.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:19 vm03 bash[23394]: cluster 2026-03-10T14:54:18.589269+0000 mon.a (mon.0) 891 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T14:54:19.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:19 vm03 bash[23394]: cluster 2026-03-10T14:54:18.600286+0000 mon.a (mon.0) 892 : cluster [DBG] osdmap e83: 8 total, 8 up, 8 in
2026-03-10T14:54:19.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:19 vm03 bash[23394]: cluster 2026-03-10T14:54:19.614947+0000 mon.a (mon.0) 893 : cluster [DBG] osdmap e84: 8 total, 8 up, 8 in
2026-03-10T14:54:19.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:19 vm00 bash[28403]: cluster 2026-03-10T14:54:18.251644+0000 mgr.y (mgr.24425) 104 : cluster [DBG] pgmap v73: 228 pgs: 64 unknown, 164 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:54:19.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:19 vm00 bash[28403]: audit 2026-03-10T14:54:18.580093+0000 mgr.y (mgr.24425) 105 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:54:19.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:19 vm00 bash[28403]: cluster 2026-03-10T14:54:18.589269+0000 mon.a (mon.0) 891 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T14:54:19.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:19 vm00 bash[28403]: cluster 2026-03-10T14:54:18.600286+0000 mon.a (mon.0) 892 : cluster [DBG] osdmap e83: 8 total, 8 up, 8 in
2026-03-10T14:54:19.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:19 vm00 bash[28403]: cluster 2026-03-10T14:54:19.614947+0000 mon.a (mon.0) 893 : cluster [DBG] osdmap e84: 8 total, 8 up, 8 in
2026-03-10T14:54:19.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:19 vm00 bash[20726]: cluster 2026-03-10T14:54:18.251644+0000 mgr.y (mgr.24425) 104 : cluster [DBG] pgmap v73: 228 pgs: 64 unknown, 164 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:54:19.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:19 vm00 bash[20726]: audit 2026-03-10T14:54:18.580093+0000 mgr.y (mgr.24425) 105 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:54:19.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:19 vm00 bash[20726]: cluster 2026-03-10T14:54:18.589269+0000 mon.a (mon.0) 891 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T14:54:19.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:19 vm00 bash[20726]: cluster 2026-03-10T14:54:18.600286+0000 mon.a (mon.0) 892 : cluster [DBG] osdmap e83: 8 total, 8 up, 8 in
2026-03-10T14:54:19.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:19 vm00 bash[20726]: cluster 2026-03-10T14:54:19.614947+0000 mon.a (mon.0) 893 : cluster [DBG] osdmap e84: 8 total, 8 up, 8 in
2026-03-10T14:54:21.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:21 vm00 bash[28403]: cluster 2026-03-10T14:54:20.252066+0000 mgr.y (mgr.24425) 106 : cluster [DBG] pgmap v76: 228 pgs: 22 unknown, 206 active+clean; 455 KiB data, 218 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:54:21.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:21 vm00 bash[28403]: cluster 2026-03-10T14:54:20.599703+0000 mon.a (mon.0) 894 : cluster [DBG] osdmap e85: 8 total, 8 up, 8 in
2026-03-10T14:54:21.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:21 vm00 bash[20726]: cluster 2026-03-10T14:54:20.252066+0000 mgr.y (mgr.24425) 106 : cluster [DBG] pgmap v76: 228 pgs: 22 unknown, 206 active+clean; 455 KiB data, 218 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:54:21.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:21 vm00 bash[20726]: cluster 2026-03-10T14:54:20.599703+0000 mon.a (mon.0) 894 : cluster [DBG] osdmap e85: 8 total, 8 up, 8 in
2026-03-10T14:54:22.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:21 vm03 bash[23394]: cluster 2026-03-10T14:54:20.252066+0000 mgr.y (mgr.24425) 106 : cluster [DBG] pgmap v76: 228 pgs: 22 unknown, 206 active+clean; 455 KiB data, 218 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:54:22.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:21 vm03 bash[23394]: cluster 2026-03-10T14:54:20.599703+0000 mon.a (mon.0) 894 : cluster [DBG] osdmap e85: 8 total, 8 up, 8 in
2026-03-10T14:54:22.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:22 vm00 bash[28403]: cluster 2026-03-10T14:54:21.641200+0000 mon.a (mon.0) 895 : cluster [DBG] osdmap e86: 8 total, 8 up, 8 in
2026-03-10T14:54:22.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:22 vm00 bash[20726]: cluster 2026-03-10T14:54:21.641200+0000 mon.a (mon.0) 895 : cluster [DBG] osdmap e86: 8 total, 8 up, 8 in
2026-03-10T14:54:23.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:22 vm03 bash[23394]: cluster 2026-03-10T14:54:21.641200+0000 mon.a (mon.0) 895 : cluster [DBG] osdmap e86: 8 total, 8 up, 8 in
2026-03-10T14:54:23.717 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRados::test_list_pools PASSED [ 16%]
2026-03-10T14:54:24.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:23 vm03 bash[23394]: cluster 2026-03-10T14:54:22.252330+0000 mgr.y (mgr.24425) 107 : cluster [DBG] pgmap v79: 164 pgs: 164 active+clean; 455 KiB data, 218 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:54:24.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:23 vm03 bash[23394]: cluster 2026-03-10T14:54:22.677287+0000 mon.a (mon.0) 896 : cluster [DBG] osdmap e87: 8 total, 8 up, 8 in
2026-03-10T14:54:24.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:23 vm00 bash[28403]: cluster 2026-03-10T14:54:22.252330+0000 mgr.y (mgr.24425) 107 : cluster [DBG] pgmap v79: 164 pgs: 164 active+clean; 455 KiB data, 218 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:54:24.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:23 vm00 bash[28403]: cluster 2026-03-10T14:54:22.677287+0000 mon.a (mon.0) 896 : cluster [DBG] osdmap e87: 8 total, 8 up, 8 in
2026-03-10T14:54:24.215 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:54:23 vm00 bash[21005]: ::ffff:192.168.123.103 - - [10/Mar/2026:14:54:23] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T14:54:24.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:23 vm00 bash[20726]: cluster 2026-03-10T14:54:22.252330+0000 mgr.y (mgr.24425) 107 : cluster [DBG] pgmap v79: 164 pgs: 164 active+clean; 455 KiB data, 218 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:54:24.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:23 vm00 bash[20726]: cluster 2026-03-10T14:54:22.677287+0000 mon.a (mon.0) 896 : cluster [DBG] osdmap e87: 8 total, 8 up, 8 in
2026-03-10T14:54:25.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:24 vm03 bash[23394]: cluster 2026-03-10T14:54:23.706693+0000 mon.a (mon.0) 897 : cluster [DBG] osdmap e88: 8 total, 8 up, 8 in
2026-03-10T14:54:25.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:24 vm03 bash[23394]: audit 2026-03-10T14:54:24.378197+0000 mon.a (mon.0) 898 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:54:25.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:24 vm00 bash[28403]: cluster 2026-03-10T14:54:23.706693+0000 mon.a (mon.0) 897 : cluster [DBG] osdmap e88: 8 total, 8 up, 8 in
2026-03-10T14:54:25.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:24 vm00 bash[28403]: audit 2026-03-10T14:54:24.378197+0000 mon.a (mon.0) 898 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:54:25.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:24 vm00 bash[20726]: cluster 2026-03-10T14:54:23.706693+0000 mon.a (mon.0) 897 : cluster [DBG] osdmap e88: 8 total, 8 up, 8 in
2026-03-10T14:54:25.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:24 vm00 bash[20726]: audit 2026-03-10T14:54:24.378197+0000 mon.a (mon.0) 898 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:54:26.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:25 vm03 bash[23394]: cluster 2026-03-10T14:54:24.252625+0000 mgr.y (mgr.24425) 108 : cluster [DBG] pgmap v82: 164 pgs: 164 active+clean; 455 KiB data, 218 MiB used, 160 GiB / 160 GiB avail
2026-03-10T14:54:26.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:25 vm03 bash[23394]: cluster 2026-03-10T14:54:24.759951+0000 mon.a (mon.0) 899 : cluster [DBG] osdmap e89: 8 total, 8 up, 8 in
2026-03-10T14:54:26.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:25 vm00 bash[28403]: cluster 2026-03-10T14:54:24.252625+0000 mgr.y (mgr.24425) 108 : cluster [DBG] pgmap v82: 164 pgs: 164 active+clean; 455 KiB data, 218 MiB used, 160 GiB / 160 GiB avail
2026-03-10T14:54:26.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:25 vm00 bash[28403]: cluster 2026-03-10T14:54:24.759951+0000 mon.a (mon.0) 899 : cluster [DBG] osdmap e89: 8 total, 8 up, 8 in
2026-03-10T14:54:26.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:25 vm00 bash[20726]: cluster 2026-03-10T14:54:24.252625+0000 mgr.y (mgr.24425) 108 : cluster [DBG] pgmap v82: 164 pgs: 164 active+clean; 455 KiB data, 218 MiB used, 160 GiB / 160 GiB avail
2026-03-10T14:54:26.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:25 vm00 bash[20726]: cluster 2026-03-10T14:54:24.759951+0000 mon.a (mon.0) 899 : cluster [DBG] osdmap e89: 8 total, 8 up, 8 in
2026-03-10T14:54:27.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:26 vm03 bash[23394]: cluster 2026-03-10T14:54:25.745420+0000 mon.a (mon.0) 900 : cluster [DBG] osdmap e90: 8 total, 8 up, 8 in
2026-03-10T14:54:27.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:26 vm03 bash[23394]: audit 2026-03-10T14:54:25.768317+0000 mon.c (mon.2) 26 : audit [INF] from='client.? 192.168.123.100:0/903592881' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "foo", "tierpool": "foo-cache", "force_nonempty": ""}]: dispatch
2026-03-10T14:54:27.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:26 vm03 bash[23394]: audit 2026-03-10T14:54:25.788308+0000 mon.a (mon.0) 901 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "foo", "tierpool": "foo-cache", "force_nonempty": ""}]: dispatch
2026-03-10T14:54:27.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:26 vm00 bash[28403]: cluster 2026-03-10T14:54:25.745420+0000 mon.a (mon.0) 900 : cluster [DBG] osdmap e90: 8 total, 8 up, 8 in
2026-03-10T14:54:27.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:26 vm00 bash[28403]: audit 2026-03-10T14:54:25.768317+0000 mon.c (mon.2) 26 : audit [INF] from='client.?
192.168.123.100:0/903592881' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "foo", "tierpool": "foo-cache", "force_nonempty": ""}]: dispatch 2026-03-10T14:54:27.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:26 vm00 bash[28403]: audit 2026-03-10T14:54:25.768317+0000 mon.c (mon.2) 26 : audit [INF] from='client.? 192.168.123.100:0/903592881' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "foo", "tierpool": "foo-cache", "force_nonempty": ""}]: dispatch 2026-03-10T14:54:27.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:26 vm00 bash[28403]: audit 2026-03-10T14:54:25.788308+0000 mon.a (mon.0) 901 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "foo", "tierpool": "foo-cache", "force_nonempty": ""}]: dispatch 2026-03-10T14:54:27.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:26 vm00 bash[28403]: audit 2026-03-10T14:54:25.788308+0000 mon.a (mon.0) 901 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "foo", "tierpool": "foo-cache", "force_nonempty": ""}]: dispatch 2026-03-10T14:54:27.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:26 vm00 bash[20726]: cluster 2026-03-10T14:54:25.745420+0000 mon.a (mon.0) 900 : cluster [DBG] osdmap e90: 8 total, 8 up, 8 in 2026-03-10T14:54:27.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:26 vm00 bash[20726]: cluster 2026-03-10T14:54:25.745420+0000 mon.a (mon.0) 900 : cluster [DBG] osdmap e90: 8 total, 8 up, 8 in 2026-03-10T14:54:27.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:26 vm00 bash[20726]: audit 2026-03-10T14:54:25.768317+0000 mon.c (mon.2) 26 : audit [INF] from='client.? 
192.168.123.100:0/903592881' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "foo", "tierpool": "foo-cache", "force_nonempty": ""}]: dispatch 2026-03-10T14:54:27.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:26 vm00 bash[20726]: audit 2026-03-10T14:54:25.768317+0000 mon.c (mon.2) 26 : audit [INF] from='client.? 192.168.123.100:0/903592881' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "foo", "tierpool": "foo-cache", "force_nonempty": ""}]: dispatch 2026-03-10T14:54:27.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:26 vm00 bash[20726]: audit 2026-03-10T14:54:25.788308+0000 mon.a (mon.0) 901 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "foo", "tierpool": "foo-cache", "force_nonempty": ""}]: dispatch 2026-03-10T14:54:27.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:26 vm00 bash[20726]: audit 2026-03-10T14:54:25.788308+0000 mon.a (mon.0) 901 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "foo", "tierpool": "foo-cache", "force_nonempty": ""}]: dispatch 2026-03-10T14:54:28.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:27 vm03 bash[23394]: cluster 2026-03-10T14:54:26.253070+0000 mgr.y (mgr.24425) 109 : cluster [DBG] pgmap v85: 228 pgs: 32 unknown, 32 creating+peering, 164 active+clean; 455 KiB data, 218 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:28.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:27 vm03 bash[23394]: cluster 2026-03-10T14:54:26.253070+0000 mgr.y (mgr.24425) 109 : cluster [DBG] pgmap v85: 228 pgs: 32 unknown, 32 creating+peering, 164 active+clean; 455 KiB data, 218 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:28.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:27 vm03 bash[23394]: cluster 2026-03-10T14:54:26.779368+0000 mon.a (mon.0) 902 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled 
(POOL_APP_NOT_ENABLED) 2026-03-10T14:54:28.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:27 vm03 bash[23394]: cluster 2026-03-10T14:54:26.779368+0000 mon.a (mon.0) 902 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:54:28.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:27 vm03 bash[23394]: audit 2026-03-10T14:54:26.832382+0000 mon.a (mon.0) 903 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "foo", "tierpool": "foo-cache", "force_nonempty": ""}]': finished 2026-03-10T14:54:28.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:27 vm03 bash[23394]: audit 2026-03-10T14:54:26.832382+0000 mon.a (mon.0) 903 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "foo", "tierpool": "foo-cache", "force_nonempty": ""}]': finished 2026-03-10T14:54:28.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:27 vm03 bash[23394]: cluster 2026-03-10T14:54:26.837407+0000 mon.a (mon.0) 904 : cluster [DBG] osdmap e91: 8 total, 8 up, 8 in 2026-03-10T14:54:28.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:27 vm03 bash[23394]: cluster 2026-03-10T14:54:26.837407+0000 mon.a (mon.0) 904 : cluster [DBG] osdmap e91: 8 total, 8 up, 8 in 2026-03-10T14:54:28.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:27 vm03 bash[23394]: audit 2026-03-10T14:54:26.841903+0000 mon.c (mon.2) 27 : audit [INF] from='client.? 192.168.123.100:0/903592881' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "foo-cache", "tierpool": "foo-cache", "mode": "readonly", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T14:54:28.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:27 vm03 bash[23394]: audit 2026-03-10T14:54:26.841903+0000 mon.c (mon.2) 27 : audit [INF] from='client.? 
192.168.123.100:0/903592881' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "foo-cache", "tierpool": "foo-cache", "mode": "readonly", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T14:54:28.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:27 vm03 bash[23394]: audit 2026-03-10T14:54:26.851792+0000 mon.a (mon.0) 905 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "foo-cache", "tierpool": "foo-cache", "mode": "readonly", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T14:54:28.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:27 vm03 bash[23394]: audit 2026-03-10T14:54:26.851792+0000 mon.a (mon.0) 905 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "foo-cache", "tierpool": "foo-cache", "mode": "readonly", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T14:54:28.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:27 vm03 bash[23394]: audit 2026-03-10T14:54:27.834919+0000 mon.a (mon.0) 906 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "foo-cache", "tierpool": "foo-cache", "mode": "readonly", "yes_i_really_mean_it": true}]': finished 2026-03-10T14:54:28.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:27 vm03 bash[23394]: audit 2026-03-10T14:54:27.834919+0000 mon.a (mon.0) 906 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "foo-cache", "tierpool": "foo-cache", "mode": "readonly", "yes_i_really_mean_it": true}]': finished 2026-03-10T14:54:28.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:27 vm03 bash[23394]: cluster 2026-03-10T14:54:27.842915+0000 mon.a (mon.0) 907 : cluster [DBG] osdmap e92: 8 total, 8 up, 8 in 2026-03-10T14:54:28.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:27 vm03 bash[23394]: cluster 2026-03-10T14:54:27.842915+0000 mon.a (mon.0) 907 : cluster [DBG] osdmap e92: 8 total, 8 up, 8 in 2026-03-10T14:54:28.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:27 vm03 bash[23394]: audit 2026-03-10T14:54:27.844076+0000 mon.c (mon.2) 28 : audit [INF] from='client.? 192.168.123.100:0/903592881' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "foo", "tierpool": "foo-cache"}]: dispatch 2026-03-10T14:54:28.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:27 vm03 bash[23394]: audit 2026-03-10T14:54:27.844076+0000 mon.c (mon.2) 28 : audit [INF] from='client.? 192.168.123.100:0/903592881' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "foo", "tierpool": "foo-cache"}]: dispatch 2026-03-10T14:54:28.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:27 vm03 bash[23394]: audit 2026-03-10T14:54:27.845007+0000 mon.a (mon.0) 908 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "foo", "tierpool": "foo-cache"}]: dispatch 2026-03-10T14:54:28.126 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:27 vm03 bash[23394]: audit 2026-03-10T14:54:27.845007+0000 mon.a (mon.0) 908 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "foo", "tierpool": "foo-cache"}]: dispatch 2026-03-10T14:54:28.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:27 vm00 bash[28403]: cluster 2026-03-10T14:54:26.253070+0000 mgr.y (mgr.24425) 109 : cluster [DBG] pgmap v85: 228 pgs: 32 unknown, 32 creating+peering, 164 active+clean; 455 KiB data, 218 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:28.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:27 vm00 bash[28403]: cluster 2026-03-10T14:54:26.253070+0000 mgr.y (mgr.24425) 109 : cluster [DBG] pgmap v85: 228 pgs: 32 unknown, 32 creating+peering, 164 active+clean; 455 KiB data, 218 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:28.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:27 vm00 bash[28403]: cluster 2026-03-10T14:54:26.779368+0000 mon.a (mon.0) 902 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:54:28.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:27 vm00 bash[28403]: cluster 2026-03-10T14:54:26.779368+0000 mon.a (mon.0) 902 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:54:28.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:27 vm00 bash[28403]: audit 2026-03-10T14:54:26.832382+0000 mon.a (mon.0) 903 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "foo", "tierpool": "foo-cache", "force_nonempty": ""}]': finished 2026-03-10T14:54:28.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:27 vm00 bash[28403]: audit 2026-03-10T14:54:26.832382+0000 mon.a (mon.0) 903 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "foo", "tierpool": "foo-cache", "force_nonempty": ""}]': finished 2026-03-10T14:54:28.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:27 vm00 bash[28403]: cluster 2026-03-10T14:54:26.837407+0000 mon.a (mon.0) 904 : cluster [DBG] osdmap e91: 8 total, 8 up, 8 in 2026-03-10T14:54:28.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:27 vm00 bash[28403]: cluster 2026-03-10T14:54:26.837407+0000 mon.a (mon.0) 904 : cluster [DBG] osdmap e91: 8 total, 8 up, 8 in 2026-03-10T14:54:28.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:27 vm00 bash[28403]: audit 2026-03-10T14:54:26.841903+0000 mon.c (mon.2) 27 : audit [INF] from='client.? 192.168.123.100:0/903592881' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "foo-cache", "tierpool": "foo-cache", "mode": "readonly", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T14:54:28.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:27 vm00 bash[28403]: audit 2026-03-10T14:54:26.841903+0000 mon.c (mon.2) 27 : audit [INF] from='client.? 192.168.123.100:0/903592881' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "foo-cache", "tierpool": "foo-cache", "mode": "readonly", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T14:54:28.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:27 vm00 bash[28403]: audit 2026-03-10T14:54:26.851792+0000 mon.a (mon.0) 905 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "foo-cache", "tierpool": "foo-cache", "mode": "readonly", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T14:54:28.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:27 vm00 bash[28403]: audit 2026-03-10T14:54:26.851792+0000 mon.a (mon.0) 905 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "foo-cache", "tierpool": "foo-cache", "mode": "readonly", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T14:54:28.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:27 vm00 bash[28403]: audit 2026-03-10T14:54:27.834919+0000 mon.a (mon.0) 906 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "foo-cache", "tierpool": "foo-cache", "mode": "readonly", "yes_i_really_mean_it": true}]': finished 2026-03-10T14:54:28.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:27 vm00 bash[28403]: audit 2026-03-10T14:54:27.834919+0000 mon.a (mon.0) 906 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "foo-cache", "tierpool": "foo-cache", "mode": "readonly", "yes_i_really_mean_it": true}]': finished 2026-03-10T14:54:28.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:27 vm00 bash[28403]: cluster 2026-03-10T14:54:27.842915+0000 mon.a (mon.0) 907 : cluster [DBG] osdmap e92: 8 total, 8 up, 8 in 2026-03-10T14:54:28.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:27 vm00 bash[28403]: cluster 2026-03-10T14:54:27.842915+0000 mon.a (mon.0) 907 : cluster [DBG] osdmap e92: 8 total, 8 up, 8 in 2026-03-10T14:54:28.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:27 vm00 bash[28403]: audit 2026-03-10T14:54:27.844076+0000 mon.c (mon.2) 28 : audit [INF] from='client.? 192.168.123.100:0/903592881' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "foo", "tierpool": "foo-cache"}]: dispatch 2026-03-10T14:54:28.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:27 vm00 bash[28403]: audit 2026-03-10T14:54:27.844076+0000 mon.c (mon.2) 28 : audit [INF] from='client.? 
192.168.123.100:0/903592881' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "foo", "tierpool": "foo-cache"}]: dispatch 2026-03-10T14:54:28.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:27 vm00 bash[28403]: audit 2026-03-10T14:54:27.845007+0000 mon.a (mon.0) 908 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "foo", "tierpool": "foo-cache"}]: dispatch 2026-03-10T14:54:28.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:27 vm00 bash[28403]: audit 2026-03-10T14:54:27.845007+0000 mon.a (mon.0) 908 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "foo", "tierpool": "foo-cache"}]: dispatch 2026-03-10T14:54:28.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:27 vm00 bash[20726]: cluster 2026-03-10T14:54:26.253070+0000 mgr.y (mgr.24425) 109 : cluster [DBG] pgmap v85: 228 pgs: 32 unknown, 32 creating+peering, 164 active+clean; 455 KiB data, 218 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:28.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:27 vm00 bash[20726]: cluster 2026-03-10T14:54:26.253070+0000 mgr.y (mgr.24425) 109 : cluster [DBG] pgmap v85: 228 pgs: 32 unknown, 32 creating+peering, 164 active+clean; 455 KiB data, 218 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:28.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:27 vm00 bash[20726]: cluster 2026-03-10T14:54:26.779368+0000 mon.a (mon.0) 902 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:54:28.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:27 vm00 bash[20726]: cluster 2026-03-10T14:54:26.779368+0000 mon.a (mon.0) 902 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:54:28.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:27 vm00 bash[20726]: audit 
2026-03-10T14:54:26.832382+0000 mon.a (mon.0) 903 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "foo", "tierpool": "foo-cache", "force_nonempty": ""}]': finished 2026-03-10T14:54:28.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:27 vm00 bash[20726]: audit 2026-03-10T14:54:26.832382+0000 mon.a (mon.0) 903 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "foo", "tierpool": "foo-cache", "force_nonempty": ""}]': finished 2026-03-10T14:54:28.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:27 vm00 bash[20726]: cluster 2026-03-10T14:54:26.837407+0000 mon.a (mon.0) 904 : cluster [DBG] osdmap e91: 8 total, 8 up, 8 in 2026-03-10T14:54:28.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:27 vm00 bash[20726]: cluster 2026-03-10T14:54:26.837407+0000 mon.a (mon.0) 904 : cluster [DBG] osdmap e91: 8 total, 8 up, 8 in 2026-03-10T14:54:28.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:27 vm00 bash[20726]: audit 2026-03-10T14:54:26.841903+0000 mon.c (mon.2) 27 : audit [INF] from='client.? 192.168.123.100:0/903592881' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "foo-cache", "tierpool": "foo-cache", "mode": "readonly", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T14:54:28.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:27 vm00 bash[20726]: audit 2026-03-10T14:54:26.841903+0000 mon.c (mon.2) 27 : audit [INF] from='client.? 192.168.123.100:0/903592881' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "foo-cache", "tierpool": "foo-cache", "mode": "readonly", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T14:54:28.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:27 vm00 bash[20726]: audit 2026-03-10T14:54:26.851792+0000 mon.a (mon.0) 905 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "foo-cache", "tierpool": "foo-cache", "mode": "readonly", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T14:54:28.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:27 vm00 bash[20726]: audit 2026-03-10T14:54:26.851792+0000 mon.a (mon.0) 905 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "foo-cache", "tierpool": "foo-cache", "mode": "readonly", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T14:54:28.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:27 vm00 bash[20726]: audit 2026-03-10T14:54:27.834919+0000 mon.a (mon.0) 906 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "foo-cache", "tierpool": "foo-cache", "mode": "readonly", "yes_i_really_mean_it": true}]': finished 2026-03-10T14:54:28.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:27 vm00 bash[20726]: audit 2026-03-10T14:54:27.834919+0000 mon.a (mon.0) 906 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "foo-cache", "tierpool": "foo-cache", "mode": "readonly", "yes_i_really_mean_it": true}]': finished 2026-03-10T14:54:28.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:27 vm00 bash[20726]: cluster 2026-03-10T14:54:27.842915+0000 mon.a (mon.0) 907 : cluster [DBG] osdmap e92: 8 total, 8 up, 8 in 2026-03-10T14:54:28.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:27 vm00 bash[20726]: cluster 2026-03-10T14:54:27.842915+0000 mon.a (mon.0) 907 : cluster [DBG] osdmap e92: 8 total, 8 up, 8 in 2026-03-10T14:54:28.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:27 vm00 bash[20726]: audit 2026-03-10T14:54:27.844076+0000 mon.c (mon.2) 28 : audit [INF] from='client.? 
192.168.123.100:0/903592881' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "foo", "tierpool": "foo-cache"}]: dispatch 2026-03-10T14:54:28.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:27 vm00 bash[20726]: audit 2026-03-10T14:54:27.844076+0000 mon.c (mon.2) 28 : audit [INF] from='client.? 192.168.123.100:0/903592881' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "foo", "tierpool": "foo-cache"}]: dispatch 2026-03-10T14:54:28.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:27 vm00 bash[20726]: audit 2026-03-10T14:54:27.845007+0000 mon.a (mon.0) 908 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "foo", "tierpool": "foo-cache"}]: dispatch 2026-03-10T14:54:28.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:27 vm00 bash[20726]: audit 2026-03-10T14:54:27.845007+0000 mon.a (mon.0) 908 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "foo", "tierpool": "foo-cache"}]: dispatch 2026-03-10T14:54:28.872 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 14:54:28 vm03 bash[48459]: debug there is no tcmu-runner data available 2026-03-10T14:54:29.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:28 vm03 bash[23394]: cluster 2026-03-10T14:54:28.253448+0000 mgr.y (mgr.24425) 110 : cluster [DBG] pgmap v88: 228 pgs: 32 unknown, 32 creating+peering, 164 active+clean; 455 KiB data, 218 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:29.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:28 vm03 bash[23394]: cluster 2026-03-10T14:54:28.253448+0000 mgr.y (mgr.24425) 110 : cluster [DBG] pgmap v88: 228 pgs: 32 unknown, 32 creating+peering, 164 active+clean; 455 KiB data, 218 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:29.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:28 vm03 bash[23394]: audit 2026-03-10T14:54:28.590857+0000 mgr.y (mgr.24425) 111 : audit [DBG] 
from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:54:29.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:28 vm03 bash[23394]: audit 2026-03-10T14:54:28.590857+0000 mgr.y (mgr.24425) 111 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:54:29.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:28 vm03 bash[23394]: audit 2026-03-10T14:54:28.854313+0000 mon.a (mon.0) 909 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "foo", "tierpool": "foo-cache"}]': finished 2026-03-10T14:54:29.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:28 vm03 bash[23394]: audit 2026-03-10T14:54:28.854313+0000 mon.a (mon.0) 909 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "foo", "tierpool": "foo-cache"}]': finished 2026-03-10T14:54:29.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:28 vm03 bash[23394]: cluster 2026-03-10T14:54:28.864392+0000 mon.a (mon.0) 910 : cluster [DBG] osdmap e93: 8 total, 8 up, 8 in 2026-03-10T14:54:29.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:28 vm03 bash[23394]: cluster 2026-03-10T14:54:28.864392+0000 mon.a (mon.0) 910 : cluster [DBG] osdmap e93: 8 total, 8 up, 8 in 2026-03-10T14:54:29.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:28 vm00 bash[28403]: cluster 2026-03-10T14:54:28.253448+0000 mgr.y (mgr.24425) 110 : cluster [DBG] pgmap v88: 228 pgs: 32 unknown, 32 creating+peering, 164 active+clean; 455 KiB data, 218 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:29.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:28 vm00 bash[28403]: cluster 2026-03-10T14:54:28.253448+0000 mgr.y (mgr.24425) 110 : cluster [DBG] pgmap v88: 228 pgs: 32 unknown, 32 creating+peering, 164 active+clean; 455 KiB data, 218 MiB used, 160 GiB / 
160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:29.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:28 vm00 bash[28403]: audit 2026-03-10T14:54:28.590857+0000 mgr.y (mgr.24425) 111 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:54:29.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:28 vm00 bash[28403]: audit 2026-03-10T14:54:28.590857+0000 mgr.y (mgr.24425) 111 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:54:29.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:28 vm00 bash[28403]: audit 2026-03-10T14:54:28.854313+0000 mon.a (mon.0) 909 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "foo", "tierpool": "foo-cache"}]': finished 2026-03-10T14:54:29.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:28 vm00 bash[28403]: audit 2026-03-10T14:54:28.854313+0000 mon.a (mon.0) 909 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "foo", "tierpool": "foo-cache"}]': finished 2026-03-10T14:54:29.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:28 vm00 bash[28403]: cluster 2026-03-10T14:54:28.864392+0000 mon.a (mon.0) 910 : cluster [DBG] osdmap e93: 8 total, 8 up, 8 in 2026-03-10T14:54:29.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:28 vm00 bash[28403]: cluster 2026-03-10T14:54:28.864392+0000 mon.a (mon.0) 910 : cluster [DBG] osdmap e93: 8 total, 8 up, 8 in 2026-03-10T14:54:29.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:28 vm00 bash[20726]: cluster 2026-03-10T14:54:28.253448+0000 mgr.y (mgr.24425) 110 : cluster [DBG] pgmap v88: 228 pgs: 32 unknown, 32 creating+peering, 164 active+clean; 455 KiB data, 218 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:29.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:28 vm00 bash[20726]: cluster 2026-03-10T14:54:28.253448+0000 mgr.y (mgr.24425) 110 : cluster [DBG] pgmap v88: 228 pgs: 32 unknown, 32 creating+peering, 164 active+clean; 455 KiB data, 218 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:29.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:28 vm00 bash[20726]: audit 2026-03-10T14:54:28.590857+0000 mgr.y (mgr.24425) 111 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:54:29.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:28 vm00 bash[20726]: audit 2026-03-10T14:54:28.590857+0000 mgr.y (mgr.24425) 111 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:54:29.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:28 vm00 bash[20726]: audit 2026-03-10T14:54:28.854313+0000 mon.a (mon.0) 909 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "foo", "tierpool": "foo-cache"}]': finished 2026-03-10T14:54:29.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:28 vm00 bash[20726]: audit 2026-03-10T14:54:28.854313+0000 mon.a (mon.0) 909 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "foo", "tierpool": "foo-cache"}]': finished 2026-03-10T14:54:29.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:28 vm00 bash[20726]: cluster 2026-03-10T14:54:28.864392+0000 mon.a (mon.0) 910 : cluster [DBG] osdmap e93: 8 total, 8 up, 8 in 2026-03-10T14:54:29.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:28 vm00 bash[20726]: cluster 2026-03-10T14:54:28.864392+0000 mon.a (mon.0) 910 : cluster [DBG] osdmap e93: 8 total, 8 up, 8 in 2026-03-10T14:54:30.872 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRados::test_get_pool_base_tier PASSED [ 17%] 2026-03-10T14:54:30.893 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRados::test_get_fsid PASSED [ 18%] 2026-03-10T14:54:31.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:30 vm03 bash[23394]: cluster 2026-03-10T14:54:29.860178+0000 mon.a (mon.0) 911 : cluster [DBG] osdmap e94: 8 total, 8 up, 8 in 2026-03-10T14:54:31.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:30 vm03 bash[23394]: cluster 2026-03-10T14:54:29.860178+0000 mon.a (mon.0) 911 : cluster [DBG] osdmap e94: 8 total, 8 up, 8 in 2026-03-10T14:54:31.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:30 vm03 bash[23394]: cluster 2026-03-10T14:54:30.254000+0000 mgr.y (mgr.24425) 112 : cluster [DBG] pgmap v91: 196 pgs: 10 creating+peering, 186 active+clean; 455 KiB data, 227 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:31.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:30 vm03 bash[23394]: cluster 2026-03-10T14:54:30.254000+0000 mgr.y 
(mgr.24425) 112 : cluster [DBG] pgmap v91: 196 pgs: 10 creating+peering, 186 active+clean; 455 KiB data, 227 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:31.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:30 vm00 bash[28403]: cluster 2026-03-10T14:54:29.860178+0000 mon.a (mon.0) 911 : cluster [DBG] osdmap e94: 8 total, 8 up, 8 in 2026-03-10T14:54:31.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:30 vm00 bash[28403]: cluster 2026-03-10T14:54:29.860178+0000 mon.a (mon.0) 911 : cluster [DBG] osdmap e94: 8 total, 8 up, 8 in 2026-03-10T14:54:31.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:30 vm00 bash[28403]: cluster 2026-03-10T14:54:30.254000+0000 mgr.y (mgr.24425) 112 : cluster [DBG] pgmap v91: 196 pgs: 10 creating+peering, 186 active+clean; 455 KiB data, 227 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:31.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:30 vm00 bash[28403]: cluster 2026-03-10T14:54:30.254000+0000 mgr.y (mgr.24425) 112 : cluster [DBG] pgmap v91: 196 pgs: 10 creating+peering, 186 active+clean; 455 KiB data, 227 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:31.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:30 vm00 bash[20726]: cluster 2026-03-10T14:54:29.860178+0000 mon.a (mon.0) 911 : cluster [DBG] osdmap e94: 8 total, 8 up, 8 in 2026-03-10T14:54:31.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:30 vm00 bash[20726]: cluster 2026-03-10T14:54:29.860178+0000 mon.a (mon.0) 911 : cluster [DBG] osdmap e94: 8 total, 8 up, 8 in 2026-03-10T14:54:31.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:30 vm00 bash[20726]: cluster 2026-03-10T14:54:30.254000+0000 mgr.y (mgr.24425) 112 : cluster [DBG] pgmap v91: 196 pgs: 10 creating+peering, 186 active+clean; 455 KiB data, 227 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:31.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:30 vm00 bash[20726]: 
cluster 2026-03-10T14:54:30.254000+0000 mgr.y (mgr.24425) 112 : cluster [DBG] pgmap v91: 196 pgs: 10 creating+peering, 186 active+clean; 455 KiB data, 227 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:31.882 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRados::test_blocklist_add PASSED [ 19%] 2026-03-10T14:54:31.903 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRados::test_get_cluster_stats PASSED [ 20%] 2026-03-10T14:54:31.915 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRados::test_monitor_log PASSED [ 21%] 2026-03-10T14:54:32.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:31 vm00 bash[20726]: cluster 2026-03-10T14:54:30.863862+0000 mon.a (mon.0) 912 : cluster [DBG] osdmap e95: 8 total, 8 up, 8 in 2026-03-10T14:54:32.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:31 vm00 bash[20726]: cluster 2026-03-10T14:54:30.863862+0000 mon.a (mon.0) 912 : cluster [DBG] osdmap e95: 8 total, 8 up, 8 in 2026-03-10T14:54:32.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:31 vm00 bash[20726]: audit 2026-03-10T14:54:30.904068+0000 mon.b (mon.1) 30 : audit [INF] from='client.? 192.168.123.100:0/2226120739' entity='client.admin' cmd=[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "1.2.3.4/123", "expire": 1.0}]: dispatch 2026-03-10T14:54:32.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:31 vm00 bash[20726]: audit 2026-03-10T14:54:30.904068+0000 mon.b (mon.1) 30 : audit [INF] from='client.? 192.168.123.100:0/2226120739' entity='client.admin' cmd=[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "1.2.3.4/123", "expire": 1.0}]: dispatch 2026-03-10T14:54:32.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:31 vm00 bash[20726]: audit 2026-03-10T14:54:30.907435+0000 mon.a (mon.0) 913 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "1.2.3.4/123", "expire": 1.0}]: dispatch 2026-03-10T14:54:32.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:31 vm00 bash[20726]: audit 2026-03-10T14:54:30.907435+0000 mon.a (mon.0) 913 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "1.2.3.4/123", "expire": 1.0}]: dispatch 2026-03-10T14:54:32.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:31 vm00 bash[28403]: cluster 2026-03-10T14:54:30.863862+0000 mon.a (mon.0) 912 : cluster [DBG] osdmap e95: 8 total, 8 up, 8 in 2026-03-10T14:54:32.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:31 vm00 bash[28403]: cluster 2026-03-10T14:54:30.863862+0000 mon.a (mon.0) 912 : cluster [DBG] osdmap e95: 8 total, 8 up, 8 in 2026-03-10T14:54:32.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:31 vm00 bash[28403]: audit 2026-03-10T14:54:30.904068+0000 mon.b (mon.1) 30 : audit [INF] from='client.? 192.168.123.100:0/2226120739' entity='client.admin' cmd=[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "1.2.3.4/123", "expire": 1.0}]: dispatch 2026-03-10T14:54:32.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:31 vm00 bash[28403]: audit 2026-03-10T14:54:30.904068+0000 mon.b (mon.1) 30 : audit [INF] from='client.? 192.168.123.100:0/2226120739' entity='client.admin' cmd=[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "1.2.3.4/123", "expire": 1.0}]: dispatch 2026-03-10T14:54:32.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:31 vm00 bash[28403]: audit 2026-03-10T14:54:30.907435+0000 mon.a (mon.0) 913 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "1.2.3.4/123", "expire": 1.0}]: dispatch 2026-03-10T14:54:32.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:31 vm00 bash[28403]: audit 2026-03-10T14:54:30.907435+0000 mon.a (mon.0) 913 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "1.2.3.4/123", "expire": 1.0}]: dispatch 2026-03-10T14:54:32.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:31 vm03 bash[23394]: cluster 2026-03-10T14:54:30.863862+0000 mon.a (mon.0) 912 : cluster [DBG] osdmap e95: 8 total, 8 up, 8 in 2026-03-10T14:54:32.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:31 vm03 bash[23394]: cluster 2026-03-10T14:54:30.863862+0000 mon.a (mon.0) 912 : cluster [DBG] osdmap e95: 8 total, 8 up, 8 in 2026-03-10T14:54:32.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:31 vm03 bash[23394]: audit 2026-03-10T14:54:30.904068+0000 mon.b (mon.1) 30 : audit [INF] from='client.? 192.168.123.100:0/2226120739' entity='client.admin' cmd=[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "1.2.3.4/123", "expire": 1.0}]: dispatch 2026-03-10T14:54:32.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:31 vm03 bash[23394]: audit 2026-03-10T14:54:30.904068+0000 mon.b (mon.1) 30 : audit [INF] from='client.? 192.168.123.100:0/2226120739' entity='client.admin' cmd=[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "1.2.3.4/123", "expire": 1.0}]: dispatch 2026-03-10T14:54:32.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:31 vm03 bash[23394]: audit 2026-03-10T14:54:30.907435+0000 mon.a (mon.0) 913 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "1.2.3.4/123", "expire": 1.0}]: dispatch 2026-03-10T14:54:32.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:31 vm03 bash[23394]: audit 2026-03-10T14:54:30.907435+0000 mon.a (mon.0) 913 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "1.2.3.4/123", "expire": 1.0}]: dispatch 2026-03-10T14:54:33.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:32 vm00 bash[28403]: audit 2026-03-10T14:54:31.874582+0000 mon.a (mon.0) 914 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "1.2.3.4/123", "expire": 1.0}]': finished 2026-03-10T14:54:33.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:32 vm00 bash[28403]: audit 2026-03-10T14:54:31.874582+0000 mon.a (mon.0) 914 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "1.2.3.4/123", "expire": 1.0}]': finished 2026-03-10T14:54:33.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:32 vm00 bash[28403]: cluster 2026-03-10T14:54:31.882850+0000 mon.a (mon.0) 915 : cluster [DBG] osdmap e96: 8 total, 8 up, 8 in 2026-03-10T14:54:33.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:32 vm00 bash[28403]: cluster 2026-03-10T14:54:31.882850+0000 mon.a (mon.0) 915 : cluster [DBG] osdmap e96: 8 total, 8 up, 8 in 2026-03-10T14:54:33.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:32 vm00 bash[28403]: cluster 2026-03-10T14:54:32.254383+0000 mgr.y (mgr.24425) 113 : cluster [DBG] pgmap v94: 164 pgs: 164 active+clean; 455 KiB data, 227 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:33.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:32 vm00 bash[28403]: cluster 2026-03-10T14:54:32.254383+0000 mgr.y (mgr.24425) 113 : cluster [DBG] pgmap v94: 164 pgs: 164 active+clean; 455 KiB data, 227 MiB used, 160 
GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:33.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:32 vm00 bash[20726]: audit 2026-03-10T14:54:31.874582+0000 mon.a (mon.0) 914 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "1.2.3.4/123", "expire": 1.0}]': finished 2026-03-10T14:54:33.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:32 vm00 bash[20726]: audit 2026-03-10T14:54:31.874582+0000 mon.a (mon.0) 914 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "1.2.3.4/123", "expire": 1.0}]': finished 2026-03-10T14:54:33.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:32 vm00 bash[20726]: cluster 2026-03-10T14:54:31.882850+0000 mon.a (mon.0) 915 : cluster [DBG] osdmap e96: 8 total, 8 up, 8 in 2026-03-10T14:54:33.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:32 vm00 bash[20726]: cluster 2026-03-10T14:54:31.882850+0000 mon.a (mon.0) 915 : cluster [DBG] osdmap e96: 8 total, 8 up, 8 in 2026-03-10T14:54:33.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:32 vm00 bash[20726]: cluster 2026-03-10T14:54:32.254383+0000 mgr.y (mgr.24425) 113 : cluster [DBG] pgmap v94: 164 pgs: 164 active+clean; 455 KiB data, 227 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:33.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:32 vm00 bash[20726]: cluster 2026-03-10T14:54:32.254383+0000 mgr.y (mgr.24425) 113 : cluster [DBG] pgmap v94: 164 pgs: 164 active+clean; 455 KiB data, 227 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:33.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:32 vm03 bash[23394]: audit 2026-03-10T14:54:31.874582+0000 mon.a (mon.0) 914 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "1.2.3.4/123", "expire": 1.0}]': finished 2026-03-10T14:54:33.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:32 vm03 bash[23394]: audit 2026-03-10T14:54:31.874582+0000 mon.a (mon.0) 914 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "1.2.3.4/123", "expire": 1.0}]': finished 2026-03-10T14:54:33.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:32 vm03 bash[23394]: cluster 2026-03-10T14:54:31.882850+0000 mon.a (mon.0) 915 : cluster [DBG] osdmap e96: 8 total, 8 up, 8 in 2026-03-10T14:54:33.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:32 vm03 bash[23394]: cluster 2026-03-10T14:54:31.882850+0000 mon.a (mon.0) 915 : cluster [DBG] osdmap e96: 8 total, 8 up, 8 in 2026-03-10T14:54:33.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:32 vm03 bash[23394]: cluster 2026-03-10T14:54:32.254383+0000 mgr.y (mgr.24425) 113 : cluster [DBG] pgmap v94: 164 pgs: 164 active+clean; 455 KiB data, 227 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:33.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:32 vm03 bash[23394]: cluster 2026-03-10T14:54:32.254383+0000 mgr.y (mgr.24425) 113 : cluster [DBG] pgmap v94: 164 pgs: 164 active+clean; 455 KiB data, 227 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:34.100 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:54:33 vm00 bash[21005]: ::ffff:192.168.123.103 - - [10/Mar/2026:14:54:33] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:54:34.100 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:33 vm00 bash[20726]: cluster 2026-03-10T14:54:32.892324+0000 mon.a (mon.0) 916 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:54:34.100 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:33 vm00 bash[20726]: cluster 
2026-03-10T14:54:32.892324+0000 mon.a (mon.0) 916 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:54:34.100 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:33 vm00 bash[20726]: cluster 2026-03-10T14:54:32.916621+0000 mon.a (mon.0) 917 : cluster [DBG] osdmap e97: 8 total, 8 up, 8 in 2026-03-10T14:54:34.100 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:33 vm00 bash[20726]: cluster 2026-03-10T14:54:32.916621+0000 mon.a (mon.0) 917 : cluster [DBG] osdmap e97: 8 total, 8 up, 8 in 2026-03-10T14:54:34.100 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:33 vm00 bash[20726]: audit 2026-03-10T14:54:32.927813+0000 mon.c (mon.2) 29 : audit [INF] from='client.? 192.168.123.100:0/1270100083' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:34.100 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:33 vm00 bash[20726]: audit 2026-03-10T14:54:32.927813+0000 mon.c (mon.2) 29 : audit [INF] from='client.? 192.168.123.100:0/1270100083' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:34.100 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:33 vm00 bash[20726]: audit 2026-03-10T14:54:32.928939+0000 mon.a (mon.0) 918 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:34.100 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:33 vm00 bash[20726]: audit 2026-03-10T14:54:32.928939+0000 mon.a (mon.0) 918 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:34.100 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:33 vm00 bash[20726]: audit 2026-03-10T14:54:33.428293+0000 mon.a (mon.0) 919 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:54:34.100 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:33 vm00 bash[20726]: audit 2026-03-10T14:54:33.428293+0000 mon.a (mon.0) 919 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:54:34.100 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:33 vm00 bash[20726]: cluster 2026-03-10T14:54:33.435889+0000 mon.a (mon.0) 920 : cluster [DBG] osdmap e98: 8 total, 8 up, 8 in 2026-03-10T14:54:34.100 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:33 vm00 bash[20726]: cluster 2026-03-10T14:54:33.435889+0000 mon.a (mon.0) 920 : cluster [DBG] osdmap e98: 8 total, 8 up, 8 in 2026-03-10T14:54:34.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:34 vm03 bash[23394]: cluster 2026-03-10T14:54:32.892324+0000 mon.a (mon.0) 916 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:54:34.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:34 vm03 bash[23394]: cluster 2026-03-10T14:54:32.892324+0000 mon.a (mon.0) 916 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:54:34.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:34 vm03 bash[23394]: cluster 2026-03-10T14:54:32.916621+0000 mon.a (mon.0) 917 : cluster [DBG] osdmap e97: 8 total, 8 up, 8 in 2026-03-10T14:54:34.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:34 vm03 bash[23394]: cluster 2026-03-10T14:54:32.916621+0000 mon.a (mon.0) 917 : cluster [DBG] osdmap e97: 8 total, 8 up, 8 in 2026-03-10T14:54:34.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:34 vm03 bash[23394]: audit 2026-03-10T14:54:32.927813+0000 mon.c (mon.2) 29 : audit [INF] from='client.? 
192.168.123.100:0/1270100083' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:34.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:34 vm03 bash[23394]: audit 2026-03-10T14:54:32.927813+0000 mon.c (mon.2) 29 : audit [INF] from='client.? 192.168.123.100:0/1270100083' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:34.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:34 vm03 bash[23394]: audit 2026-03-10T14:54:32.928939+0000 mon.a (mon.0) 918 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:34.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:34 vm03 bash[23394]: audit 2026-03-10T14:54:32.928939+0000 mon.a (mon.0) 918 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:34.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:34 vm03 bash[23394]: audit 2026-03-10T14:54:33.428293+0000 mon.a (mon.0) 919 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:54:34.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:34 vm03 bash[23394]: audit 2026-03-10T14:54:33.428293+0000 mon.a (mon.0) 919 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:54:34.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:34 vm03 bash[23394]: cluster 2026-03-10T14:54:33.435889+0000 mon.a (mon.0) 920 : cluster [DBG] osdmap e98: 8 total, 8 up, 8 in 2026-03-10T14:54:34.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:34 vm03 bash[23394]: cluster 2026-03-10T14:54:33.435889+0000 mon.a (mon.0) 920 : cluster [DBG] osdmap e98: 8 total, 8 up, 8 in 2026-03-10T14:54:34.440 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_get_last_version PASSED [ 23%] 2026-03-10T14:54:34.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:34 vm00 bash[28403]: cluster 2026-03-10T14:54:32.892324+0000 mon.a (mon.0) 916 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:54:34.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:34 vm00 bash[28403]: cluster 2026-03-10T14:54:32.892324+0000 mon.a (mon.0) 916 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:54:34.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:34 vm00 bash[28403]: cluster 2026-03-10T14:54:32.916621+0000 mon.a (mon.0) 917 : cluster [DBG] osdmap e97: 8 total, 8 up, 8 in 2026-03-10T14:54:34.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:34 vm00 bash[28403]: cluster 2026-03-10T14:54:32.916621+0000 mon.a (mon.0) 917 : cluster [DBG] osdmap e97: 8 total, 8 up, 8 in 2026-03-10T14:54:34.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:34 vm00 bash[28403]: audit 2026-03-10T14:54:32.927813+0000 mon.c (mon.2) 29 : audit [INF] from='client.? 
192.168.123.100:0/1270100083' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:34.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:34 vm00 bash[28403]: audit 2026-03-10T14:54:32.927813+0000 mon.c (mon.2) 29 : audit [INF] from='client.? 192.168.123.100:0/1270100083' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:34.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:34 vm00 bash[28403]: audit 2026-03-10T14:54:32.928939+0000 mon.a (mon.0) 918 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:34.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:34 vm00 bash[28403]: audit 2026-03-10T14:54:32.928939+0000 mon.a (mon.0) 918 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:34.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:34 vm00 bash[28403]: audit 2026-03-10T14:54:33.428293+0000 mon.a (mon.0) 919 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:54:34.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:34 vm00 bash[28403]: audit 2026-03-10T14:54:33.428293+0000 mon.a (mon.0) 919 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:54:34.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:34 vm00 bash[28403]: cluster 2026-03-10T14:54:33.435889+0000 mon.a (mon.0) 920 : cluster [DBG] osdmap e98: 8 total, 8 up, 8 in 2026-03-10T14:54:34.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:34 vm00 bash[28403]: cluster 2026-03-10T14:54:33.435889+0000 mon.a (mon.0) 920 : cluster [DBG] osdmap e98: 8 total, 8 up, 8 in 2026-03-10T14:54:35.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:35 vm00 bash[28403]: cluster 2026-03-10T14:54:34.254694+0000 mgr.y (mgr.24425) 114 : cluster [DBG] pgmap v97: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 227 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:54:35.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:35 vm00 bash[28403]: cluster 2026-03-10T14:54:34.254694+0000 mgr.y (mgr.24425) 114 : cluster [DBG] pgmap v97: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 227 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:54:35.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:35 vm00 bash[28403]: cluster 2026-03-10T14:54:34.441299+0000 mon.a (mon.0) 921 : cluster [DBG] osdmap e99: 8 total, 8 up, 8 in 2026-03-10T14:54:35.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:35 vm00 bash[28403]: cluster 2026-03-10T14:54:34.441299+0000 mon.a (mon.0) 921 : cluster [DBG] osdmap e99: 8 total, 8 up, 8 in 2026-03-10T14:54:35.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:35 vm00 bash[20726]: cluster 2026-03-10T14:54:34.254694+0000 mgr.y (mgr.24425) 114 : cluster [DBG] pgmap v97: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 227 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:54:35.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:35 vm00 bash[20726]: cluster 2026-03-10T14:54:34.254694+0000 mgr.y (mgr.24425) 114 : cluster [DBG] pgmap v97: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 227 MiB used, 160 GiB / 160 GiB 
avail 2026-03-10T14:54:35.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:35 vm00 bash[20726]: cluster 2026-03-10T14:54:34.441299+0000 mon.a (mon.0) 921 : cluster [DBG] osdmap e99: 8 total, 8 up, 8 in 2026-03-10T14:54:35.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:35 vm00 bash[20726]: cluster 2026-03-10T14:54:34.441299+0000 mon.a (mon.0) 921 : cluster [DBG] osdmap e99: 8 total, 8 up, 8 in 2026-03-10T14:54:35.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:35 vm03 bash[23394]: cluster 2026-03-10T14:54:34.254694+0000 mgr.y (mgr.24425) 114 : cluster [DBG] pgmap v97: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 227 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:54:35.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:35 vm03 bash[23394]: cluster 2026-03-10T14:54:34.254694+0000 mgr.y (mgr.24425) 114 : cluster [DBG] pgmap v97: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 227 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:54:35.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:35 vm03 bash[23394]: cluster 2026-03-10T14:54:34.441299+0000 mon.a (mon.0) 921 : cluster [DBG] osdmap e99: 8 total, 8 up, 8 in 2026-03-10T14:54:35.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:35 vm03 bash[23394]: cluster 2026-03-10T14:54:34.441299+0000 mon.a (mon.0) 921 : cluster [DBG] osdmap e99: 8 total, 8 up, 8 in 2026-03-10T14:54:36.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:36 vm03 bash[23394]: cluster 2026-03-10T14:54:35.443481+0000 mon.a (mon.0) 922 : cluster [DBG] osdmap e100: 8 total, 8 up, 8 in 2026-03-10T14:54:36.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:36 vm03 bash[23394]: cluster 2026-03-10T14:54:35.443481+0000 mon.a (mon.0) 922 : cluster [DBG] osdmap e100: 8 total, 8 up, 8 in 2026-03-10T14:54:36.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:36 vm03 bash[23394]: audit 2026-03-10T14:54:35.454728+0000 mon.c (mon.2) 30 : audit [INF] from='client.? 
192.168.123.100:0/2772404922' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:36.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:36 vm03 bash[23394]: audit 2026-03-10T14:54:35.454728+0000 mon.c (mon.2) 30 : audit [INF] from='client.? 192.168.123.100:0/2772404922' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:36.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:36 vm03 bash[23394]: audit 2026-03-10T14:54:35.463947+0000 mon.a (mon.0) 923 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:36.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:36 vm03 bash[23394]: audit 2026-03-10T14:54:35.463947+0000 mon.a (mon.0) 923 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:36.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:36 vm03 bash[23394]: audit 2026-03-10T14:54:36.436338+0000 mon.a (mon.0) 924 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:54:36.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:36 vm03 bash[23394]: audit 2026-03-10T14:54:36.436338+0000 mon.a (mon.0) 924 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:54:36.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:36 vm03 bash[23394]: cluster 2026-03-10T14:54:36.447652+0000 mon.a (mon.0) 925 : cluster [DBG] osdmap e101: 8 total, 8 up, 8 in 2026-03-10T14:54:36.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:36 vm03 bash[23394]: cluster 2026-03-10T14:54:36.447652+0000 mon.a (mon.0) 925 : cluster [DBG] osdmap e101: 8 total, 8 up, 8 in 2026-03-10T14:54:36.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:36 vm00 bash[28403]: cluster 2026-03-10T14:54:35.443481+0000 mon.a (mon.0) 922 : cluster [DBG] osdmap e100: 8 total, 8 up, 8 in 2026-03-10T14:54:36.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:36 vm00 bash[28403]: cluster 2026-03-10T14:54:35.443481+0000 mon.a (mon.0) 922 : cluster [DBG] osdmap e100: 8 total, 8 up, 8 in 2026-03-10T14:54:36.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:36 vm00 bash[28403]: audit 2026-03-10T14:54:35.454728+0000 mon.c (mon.2) 30 : audit [INF] from='client.? 192.168.123.100:0/2772404922' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:36.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:36 vm00 bash[28403]: audit 2026-03-10T14:54:35.454728+0000 mon.c (mon.2) 30 : audit [INF] from='client.? 192.168.123.100:0/2772404922' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:36.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:36 vm00 bash[28403]: audit 2026-03-10T14:54:35.463947+0000 mon.a (mon.0) 923 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:36.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:36 vm00 bash[28403]: audit 2026-03-10T14:54:35.463947+0000 mon.a (mon.0) 923 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:36.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:36 vm00 bash[28403]: audit 2026-03-10T14:54:36.436338+0000 mon.a (mon.0) 924 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:54:36.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:36 vm00 bash[28403]: audit 2026-03-10T14:54:36.436338+0000 mon.a (mon.0) 924 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:54:36.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:36 vm00 bash[28403]: cluster 2026-03-10T14:54:36.447652+0000 mon.a (mon.0) 925 : cluster [DBG] osdmap e101: 8 total, 8 up, 8 in 2026-03-10T14:54:36.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:36 vm00 bash[28403]: cluster 2026-03-10T14:54:36.447652+0000 mon.a (mon.0) 925 : cluster [DBG] osdmap e101: 8 total, 8 up, 8 in 2026-03-10T14:54:36.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:36 vm00 bash[20726]: cluster 2026-03-10T14:54:35.443481+0000 mon.a (mon.0) 922 : cluster [DBG] osdmap e100: 8 total, 8 up, 8 in 2026-03-10T14:54:36.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:36 vm00 bash[20726]: cluster 2026-03-10T14:54:35.443481+0000 mon.a (mon.0) 922 : cluster [DBG] osdmap e100: 8 total, 8 up, 8 in 2026-03-10T14:54:36.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:36 vm00 bash[20726]: audit 2026-03-10T14:54:35.454728+0000 mon.c (mon.2) 30 : audit [INF] from='client.? 192.168.123.100:0/2772404922' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:36.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:36 vm00 bash[20726]: audit 2026-03-10T14:54:35.454728+0000 mon.c (mon.2) 30 : audit [INF] from='client.? 
192.168.123.100:0/2772404922' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:36.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:36 vm00 bash[20726]: audit 2026-03-10T14:54:35.463947+0000 mon.a (mon.0) 923 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:36.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:36 vm00 bash[20726]: audit 2026-03-10T14:54:35.463947+0000 mon.a (mon.0) 923 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:36.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:36 vm00 bash[20726]: audit 2026-03-10T14:54:36.436338+0000 mon.a (mon.0) 924 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:54:36.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:36 vm00 bash[20726]: audit 2026-03-10T14:54:36.436338+0000 mon.a (mon.0) 924 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:54:36.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:36 vm00 bash[20726]: cluster 2026-03-10T14:54:36.447652+0000 mon.a (mon.0) 925 : cluster [DBG] osdmap e101: 8 total, 8 up, 8 in 2026-03-10T14:54:36.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:36 vm00 bash[20726]: cluster 2026-03-10T14:54:36.447652+0000 mon.a (mon.0) 925 : cluster [DBG] osdmap e101: 8 total, 8 up, 8 in 2026-03-10T14:54:37.451 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_get_stats PASSED [ 24%] 2026-03-10T14:54:37.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:37 vm03 bash[23394]: cluster 2026-03-10T14:54:36.255071+0000 mgr.y (mgr.24425) 115 : cluster [DBG] pgmap v100: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 253 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:37.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:37 vm03 bash[23394]: cluster 2026-03-10T14:54:36.255071+0000 mgr.y (mgr.24425) 115 : cluster [DBG] pgmap v100: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 253 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:37.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:37 vm03 bash[23394]: cluster 2026-03-10T14:54:37.448192+0000 mon.a (mon.0) 926 : cluster [DBG] osdmap e102: 8 total, 8 up, 8 in 2026-03-10T14:54:37.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:37 vm03 bash[23394]: cluster 2026-03-10T14:54:37.448192+0000 mon.a (mon.0) 926 : cluster [DBG] osdmap e102: 8 total, 8 up, 8 in 2026-03-10T14:54:37.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:37 vm00 bash[28403]: cluster 2026-03-10T14:54:36.255071+0000 mgr.y (mgr.24425) 115 : cluster [DBG] pgmap v100: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 253 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:37.965 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:37 vm00 bash[28403]: cluster 2026-03-10T14:54:36.255071+0000 mgr.y (mgr.24425) 115 : cluster [DBG] pgmap v100: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 253 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:37.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:37 vm00 bash[28403]: cluster 2026-03-10T14:54:37.448192+0000 mon.a (mon.0) 926 : cluster [DBG] osdmap e102: 8 total, 8 up, 8 in 2026-03-10T14:54:37.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:37 vm00 bash[28403]: cluster 2026-03-10T14:54:37.448192+0000 mon.a (mon.0) 926 : cluster [DBG] osdmap e102: 8 total, 8 up, 8 in 2026-03-10T14:54:37.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:37 vm00 bash[20726]: cluster 2026-03-10T14:54:36.255071+0000 mgr.y (mgr.24425) 115 : cluster [DBG] pgmap v100: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 253 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:37.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:37 vm00 bash[20726]: cluster 2026-03-10T14:54:36.255071+0000 mgr.y (mgr.24425) 115 : cluster [DBG] pgmap v100: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 253 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:37.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:37 vm00 bash[20726]: cluster 2026-03-10T14:54:37.448192+0000 mon.a (mon.0) 926 : cluster [DBG] osdmap e102: 8 total, 8 up, 8 in 2026-03-10T14:54:37.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:37 vm00 bash[20726]: cluster 2026-03-10T14:54:37.448192+0000 mon.a (mon.0) 926 : cluster [DBG] osdmap e102: 8 total, 8 up, 8 in 2026-03-10T14:54:38.875 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 14:54:38 vm03 bash[48459]: debug there is no tcmu-runner data available 2026-03-10T14:54:38.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:38 vm03 bash[23394]: cluster 2026-03-10T14:54:38.426882+0000 mon.a (mon.0) 927 : 
cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:54:38.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:38 vm03 bash[23394]: cluster 2026-03-10T14:54:38.426882+0000 mon.a (mon.0) 927 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:54:38.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:38 vm03 bash[23394]: cluster 2026-03-10T14:54:38.461758+0000 mon.a (mon.0) 928 : cluster [DBG] osdmap e103: 8 total, 8 up, 8 in 2026-03-10T14:54:38.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:38 vm03 bash[23394]: cluster 2026-03-10T14:54:38.461758+0000 mon.a (mon.0) 928 : cluster [DBG] osdmap e103: 8 total, 8 up, 8 in 2026-03-10T14:54:38.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:38 vm00 bash[28403]: cluster 2026-03-10T14:54:38.426882+0000 mon.a (mon.0) 927 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:54:38.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:38 vm00 bash[28403]: cluster 2026-03-10T14:54:38.426882+0000 mon.a (mon.0) 927 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:54:38.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:38 vm00 bash[28403]: cluster 2026-03-10T14:54:38.461758+0000 mon.a (mon.0) 928 : cluster [DBG] osdmap e103: 8 total, 8 up, 8 in 2026-03-10T14:54:38.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:38 vm00 bash[28403]: cluster 2026-03-10T14:54:38.461758+0000 mon.a (mon.0) 928 : cluster [DBG] osdmap e103: 8 total, 8 up, 8 in 2026-03-10T14:54:38.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:38 vm00 bash[20726]: cluster 2026-03-10T14:54:38.426882+0000 mon.a (mon.0) 927 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:54:38.965 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:38 vm00 bash[20726]: cluster 2026-03-10T14:54:38.426882+0000 mon.a (mon.0) 927 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:54:38.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:38 vm00 bash[20726]: cluster 2026-03-10T14:54:38.461758+0000 mon.a (mon.0) 928 : cluster [DBG] osdmap e103: 8 total, 8 up, 8 in 2026-03-10T14:54:38.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:38 vm00 bash[20726]: cluster 2026-03-10T14:54:38.461758+0000 mon.a (mon.0) 928 : cluster [DBG] osdmap e103: 8 total, 8 up, 8 in 2026-03-10T14:54:39.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:39 vm03 bash[23394]: cluster 2026-03-10T14:54:38.255475+0000 mgr.y (mgr.24425) 116 : cluster [DBG] pgmap v103: 164 pgs: 164 active+clean; 455 KiB data, 253 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:39.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:39 vm03 bash[23394]: cluster 2026-03-10T14:54:38.255475+0000 mgr.y (mgr.24425) 116 : cluster [DBG] pgmap v103: 164 pgs: 164 active+clean; 455 KiB data, 253 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:39.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:39 vm03 bash[23394]: audit 2026-03-10T14:54:38.596575+0000 mgr.y (mgr.24425) 117 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:54:39.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:39 vm03 bash[23394]: audit 2026-03-10T14:54:38.596575+0000 mgr.y (mgr.24425) 117 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:54:39.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:39 vm03 bash[23394]: audit 2026-03-10T14:54:39.383388+0000 mon.a (mon.0) 929 : audit [DBG] from='mgr.24425 
192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:54:39.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:39 vm03 bash[23394]: audit 2026-03-10T14:54:39.383388+0000 mon.a (mon.0) 929 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:54:39.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:39 vm03 bash[23394]: cluster 2026-03-10T14:54:39.500643+0000 mon.a (mon.0) 930 : cluster [DBG] osdmap e104: 8 total, 8 up, 8 in 2026-03-10T14:54:39.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:39 vm03 bash[23394]: cluster 2026-03-10T14:54:39.500643+0000 mon.a (mon.0) 930 : cluster [DBG] osdmap e104: 8 total, 8 up, 8 in 2026-03-10T14:54:39.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:39 vm03 bash[23394]: audit 2026-03-10T14:54:39.537659+0000 mon.c (mon.2) 31 : audit [INF] from='client.? 192.168.123.100:0/2966254175' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:39.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:39 vm03 bash[23394]: audit 2026-03-10T14:54:39.537659+0000 mon.c (mon.2) 31 : audit [INF] from='client.? 192.168.123.100:0/2966254175' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:39.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:39 vm03 bash[23394]: audit 2026-03-10T14:54:39.538240+0000 mon.a (mon.0) 931 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:39.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:39 vm03 bash[23394]: audit 2026-03-10T14:54:39.538240+0000 mon.a (mon.0) 931 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:39.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:39 vm00 bash[28403]: cluster 2026-03-10T14:54:38.255475+0000 mgr.y (mgr.24425) 116 : cluster [DBG] pgmap v103: 164 pgs: 164 active+clean; 455 KiB data, 253 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:39.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:39 vm00 bash[28403]: cluster 2026-03-10T14:54:38.255475+0000 mgr.y (mgr.24425) 116 : cluster [DBG] pgmap v103: 164 pgs: 164 active+clean; 455 KiB data, 253 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:39.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:39 vm00 bash[28403]: audit 2026-03-10T14:54:38.596575+0000 mgr.y (mgr.24425) 117 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:54:39.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:39 vm00 bash[28403]: audit 2026-03-10T14:54:38.596575+0000 mgr.y (mgr.24425) 117 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:54:39.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:39 vm00 bash[28403]: audit 2026-03-10T14:54:39.383388+0000 mon.a (mon.0) 929 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:54:39.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:39 vm00 bash[28403]: audit 2026-03-10T14:54:39.383388+0000 mon.a (mon.0) 929 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:54:39.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:39 vm00 bash[28403]: cluster 2026-03-10T14:54:39.500643+0000 mon.a (mon.0) 930 : cluster [DBG] osdmap e104: 8 
total, 8 up, 8 in 2026-03-10T14:54:39.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:39 vm00 bash[28403]: cluster 2026-03-10T14:54:39.500643+0000 mon.a (mon.0) 930 : cluster [DBG] osdmap e104: 8 total, 8 up, 8 in 2026-03-10T14:54:39.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:39 vm00 bash[28403]: audit 2026-03-10T14:54:39.537659+0000 mon.c (mon.2) 31 : audit [INF] from='client.? 192.168.123.100:0/2966254175' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:39.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:39 vm00 bash[28403]: audit 2026-03-10T14:54:39.537659+0000 mon.c (mon.2) 31 : audit [INF] from='client.? 192.168.123.100:0/2966254175' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:39.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:39 vm00 bash[28403]: audit 2026-03-10T14:54:39.538240+0000 mon.a (mon.0) 931 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:39.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:39 vm00 bash[28403]: audit 2026-03-10T14:54:39.538240+0000 mon.a (mon.0) 931 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:39.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:39 vm00 bash[20726]: cluster 2026-03-10T14:54:38.255475+0000 mgr.y (mgr.24425) 116 : cluster [DBG] pgmap v103: 164 pgs: 164 active+clean; 455 KiB data, 253 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:39.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:39 vm00 bash[20726]: cluster 2026-03-10T14:54:38.255475+0000 mgr.y (mgr.24425) 116 : cluster [DBG] pgmap v103: 164 pgs: 164 active+clean; 455 KiB data, 253 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:39.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:39 vm00 bash[20726]: audit 2026-03-10T14:54:38.596575+0000 mgr.y (mgr.24425) 117 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:54:39.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:39 vm00 bash[20726]: audit 2026-03-10T14:54:38.596575+0000 mgr.y (mgr.24425) 117 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:54:39.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:39 vm00 bash[20726]: audit 2026-03-10T14:54:39.383388+0000 mon.a (mon.0) 929 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:54:39.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:39 vm00 bash[20726]: audit 2026-03-10T14:54:39.383388+0000 mon.a (mon.0) 929 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:54:39.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:39 vm00 bash[20726]: cluster 2026-03-10T14:54:39.500643+0000 mon.a (mon.0) 930 : cluster [DBG] osdmap e104: 8 
total, 8 up, 8 in 2026-03-10T14:54:39.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:39 vm00 bash[20726]: cluster 2026-03-10T14:54:39.500643+0000 mon.a (mon.0) 930 : cluster [DBG] osdmap e104: 8 total, 8 up, 8 in 2026-03-10T14:54:39.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:39 vm00 bash[20726]: audit 2026-03-10T14:54:39.537659+0000 mon.c (mon.2) 31 : audit [INF] from='client.? 192.168.123.100:0/2966254175' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:39.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:39 vm00 bash[20726]: audit 2026-03-10T14:54:39.537659+0000 mon.c (mon.2) 31 : audit [INF] from='client.? 192.168.123.100:0/2966254175' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:39.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:39 vm00 bash[20726]: audit 2026-03-10T14:54:39.538240+0000 mon.a (mon.0) 931 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:39.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:39 vm00 bash[20726]: audit 2026-03-10T14:54:39.538240+0000 mon.a (mon.0) 931 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:41.498 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_write PASSED [ 25%] 2026-03-10T14:54:41.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:41 vm03 bash[23394]: cluster 2026-03-10T14:54:40.256027+0000 mgr.y (mgr.24425) 118 : cluster [DBG] pgmap v106: 196 pgs: 4 unknown, 192 active+clean; 455 KiB data, 284 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T14:54:41.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:41 vm03 bash[23394]: cluster 2026-03-10T14:54:40.256027+0000 mgr.y (mgr.24425) 118 : cluster [DBG] pgmap v106: 196 pgs: 4 unknown, 192 active+clean; 455 KiB data, 284 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T14:54:41.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:41 vm03 bash[23394]: audit 2026-03-10T14:54:40.484985+0000 mon.a (mon.0) 932 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:54:41.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:41 vm03 bash[23394]: audit 2026-03-10T14:54:40.484985+0000 mon.a (mon.0) 932 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:54:41.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:41 vm03 bash[23394]: cluster 2026-03-10T14:54:40.494590+0000 mon.a (mon.0) 933 : cluster [DBG] osdmap e105: 8 total, 8 up, 8 in 2026-03-10T14:54:41.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:41 vm03 bash[23394]: cluster 2026-03-10T14:54:40.494590+0000 mon.a (mon.0) 933 : cluster [DBG] osdmap e105: 8 total, 8 up, 8 in 2026-03-10T14:54:41.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:41 vm00 bash[28403]: cluster 2026-03-10T14:54:40.256027+0000 mgr.y (mgr.24425) 118 : cluster [DBG] pgmap v106: 196 pgs: 4 unknown, 192 active+clean; 455 KiB data, 284 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T14:54:41.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:41 vm00 bash[28403]: cluster 2026-03-10T14:54:40.256027+0000 mgr.y (mgr.24425) 118 : cluster [DBG] pgmap v106: 196 pgs: 4 unknown, 192 active+clean; 455 KiB data, 284 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T14:54:41.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:41 vm00 bash[28403]: audit 2026-03-10T14:54:40.484985+0000 mon.a (mon.0) 932 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:54:41.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:41 vm00 bash[28403]: audit 2026-03-10T14:54:40.484985+0000 mon.a (mon.0) 932 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:54:41.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:41 vm00 bash[28403]: cluster 2026-03-10T14:54:40.494590+0000 mon.a (mon.0) 933 : cluster [DBG] osdmap e105: 8 total, 8 up, 8 in 2026-03-10T14:54:41.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:41 vm00 bash[28403]: cluster 2026-03-10T14:54:40.494590+0000 mon.a (mon.0) 933 : cluster [DBG] osdmap e105: 8 total, 8 up, 8 in 2026-03-10T14:54:41.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:41 vm00 bash[20726]: cluster 2026-03-10T14:54:40.256027+0000 mgr.y (mgr.24425) 118 : cluster [DBG] pgmap v106: 196 pgs: 4 unknown, 192 active+clean; 455 KiB data, 284 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T14:54:41.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:41 vm00 bash[20726]: cluster 2026-03-10T14:54:40.256027+0000 mgr.y (mgr.24425) 118 : cluster [DBG] pgmap v106: 196 pgs: 4 unknown, 192 active+clean; 455 KiB data, 284 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T14:54:41.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:41 vm00 bash[20726]: audit 2026-03-10T14:54:40.484985+0000 mon.a (mon.0) 932 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:54:41.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:41 vm00 bash[20726]: audit 2026-03-10T14:54:40.484985+0000 mon.a (mon.0) 932 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:54:41.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:41 vm00 bash[20726]: cluster 2026-03-10T14:54:40.494590+0000 mon.a (mon.0) 933 : cluster [DBG] osdmap e105: 8 total, 8 up, 8 in 2026-03-10T14:54:41.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:41 vm00 bash[20726]: cluster 2026-03-10T14:54:40.494590+0000 mon.a (mon.0) 933 : cluster [DBG] osdmap e105: 8 total, 8 up, 8 in 2026-03-10T14:54:42.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:42 vm03 bash[23394]: cluster 2026-03-10T14:54:41.507866+0000 mon.a (mon.0) 934 : cluster [DBG] osdmap e106: 8 total, 8 up, 8 in 2026-03-10T14:54:42.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:42 vm03 bash[23394]: cluster 2026-03-10T14:54:41.507866+0000 mon.a (mon.0) 934 : cluster [DBG] osdmap e106: 8 total, 8 up, 8 in 2026-03-10T14:54:42.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:42 vm00 bash[28403]: cluster 2026-03-10T14:54:41.507866+0000 mon.a (mon.0) 934 : cluster [DBG] osdmap e106: 8 total, 8 up, 8 in 2026-03-10T14:54:42.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:42 vm00 bash[28403]: cluster 2026-03-10T14:54:41.507866+0000 mon.a (mon.0) 934 : cluster [DBG] osdmap e106: 8 total, 8 up, 8 in 2026-03-10T14:54:42.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:42 vm00 bash[20726]: cluster 2026-03-10T14:54:41.507866+0000 mon.a (mon.0) 934 : cluster [DBG] osdmap e106: 8 total, 8 up, 8 in 2026-03-10T14:54:42.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:42 vm00 bash[20726]: cluster 2026-03-10T14:54:41.507866+0000 mon.a (mon.0) 934 : cluster [DBG] osdmap e106: 8 total, 8 up, 8 in 2026-03-10T14:54:43.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:43 vm03 bash[23394]: cluster 2026-03-10T14:54:42.256397+0000 mgr.y (mgr.24425) 119 : cluster [DBG] pgmap v109: 164 pgs: 164 active+clean; 455 KiB data, 284 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 
op/s 2026-03-10T14:54:43.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:43 vm03 bash[23394]: cluster 2026-03-10T14:54:42.256397+0000 mgr.y (mgr.24425) 119 : cluster [DBG] pgmap v109: 164 pgs: 164 active+clean; 455 KiB data, 284 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:43.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:43 vm03 bash[23394]: cluster 2026-03-10T14:54:42.531312+0000 mon.a (mon.0) 935 : cluster [DBG] osdmap e107: 8 total, 8 up, 8 in 2026-03-10T14:54:43.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:43 vm03 bash[23394]: cluster 2026-03-10T14:54:42.531312+0000 mon.a (mon.0) 935 : cluster [DBG] osdmap e107: 8 total, 8 up, 8 in 2026-03-10T14:54:43.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:43 vm03 bash[23394]: cluster 2026-03-10T14:54:43.427516+0000 mon.a (mon.0) 936 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:54:43.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:43 vm03 bash[23394]: cluster 2026-03-10T14:54:43.427516+0000 mon.a (mon.0) 936 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:54:43.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:43 vm00 bash[20726]: cluster 2026-03-10T14:54:42.256397+0000 mgr.y (mgr.24425) 119 : cluster [DBG] pgmap v109: 164 pgs: 164 active+clean; 455 KiB data, 284 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:43.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:43 vm00 bash[20726]: cluster 2026-03-10T14:54:42.256397+0000 mgr.y (mgr.24425) 119 : cluster [DBG] pgmap v109: 164 pgs: 164 active+clean; 455 KiB data, 284 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:43.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:43 vm00 bash[20726]: cluster 2026-03-10T14:54:42.531312+0000 mon.a (mon.0) 935 : cluster [DBG] osdmap e107: 8 total, 8 up, 8 
in 2026-03-10T14:54:43.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:43 vm00 bash[20726]: cluster 2026-03-10T14:54:42.531312+0000 mon.a (mon.0) 935 : cluster [DBG] osdmap e107: 8 total, 8 up, 8 in 2026-03-10T14:54:43.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:43 vm00 bash[20726]: cluster 2026-03-10T14:54:43.427516+0000 mon.a (mon.0) 936 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:54:43.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:43 vm00 bash[20726]: cluster 2026-03-10T14:54:43.427516+0000 mon.a (mon.0) 936 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:54:43.965 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:54:43 vm00 bash[21005]: ::ffff:192.168.123.103 - - [10/Mar/2026:14:54:43] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:54:43.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:43 vm00 bash[28403]: cluster 2026-03-10T14:54:42.256397+0000 mgr.y (mgr.24425) 119 : cluster [DBG] pgmap v109: 164 pgs: 164 active+clean; 455 KiB data, 284 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:43.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:43 vm00 bash[28403]: cluster 2026-03-10T14:54:42.256397+0000 mgr.y (mgr.24425) 119 : cluster [DBG] pgmap v109: 164 pgs: 164 active+clean; 455 KiB data, 284 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:43.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:43 vm00 bash[28403]: cluster 2026-03-10T14:54:42.531312+0000 mon.a (mon.0) 935 : cluster [DBG] osdmap e107: 8 total, 8 up, 8 in 2026-03-10T14:54:43.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:43 vm00 bash[28403]: cluster 2026-03-10T14:54:42.531312+0000 mon.a (mon.0) 935 : cluster [DBG] osdmap e107: 8 total, 8 up, 8 in 2026-03-10T14:54:43.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:43 vm00 
bash[28403]: cluster 2026-03-10T14:54:43.427516+0000 mon.a (mon.0) 936 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:54:43.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:43 vm00 bash[28403]: cluster 2026-03-10T14:54:43.427516+0000 mon.a (mon.0) 936 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:54:44.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:44 vm00 bash[20726]: cluster 2026-03-10T14:54:43.575973+0000 mon.a (mon.0) 937 : cluster [DBG] osdmap e108: 8 total, 8 up, 8 in 2026-03-10T14:54:44.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:44 vm00 bash[20726]: cluster 2026-03-10T14:54:43.575973+0000 mon.a (mon.0) 937 : cluster [DBG] osdmap e108: 8 total, 8 up, 8 in 2026-03-10T14:54:44.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:44 vm00 bash[20726]: audit 2026-03-10T14:54:43.596548+0000 mon.c (mon.2) 32 : audit [INF] from='client.? 192.168.123.100:0/2762836065' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:44.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:44 vm00 bash[20726]: audit 2026-03-10T14:54:43.596548+0000 mon.c (mon.2) 32 : audit [INF] from='client.? 192.168.123.100:0/2762836065' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:44.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:44 vm00 bash[20726]: audit 2026-03-10T14:54:43.603299+0000 mon.a (mon.0) 938 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:44.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:44 vm00 bash[20726]: audit 2026-03-10T14:54:43.603299+0000 mon.a (mon.0) 938 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:44.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:44 vm00 bash[20726]: audit 2026-03-10T14:54:44.548630+0000 mon.a (mon.0) 939 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:54:44.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:44 vm00 bash[20726]: audit 2026-03-10T14:54:44.548630+0000 mon.a (mon.0) 939 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:54:44.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:44 vm00 bash[20726]: cluster 2026-03-10T14:54:44.553460+0000 mon.a (mon.0) 940 : cluster [DBG] osdmap e109: 8 total, 8 up, 8 in 2026-03-10T14:54:44.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:44 vm00 bash[20726]: cluster 2026-03-10T14:54:44.553460+0000 mon.a (mon.0) 940 : cluster [DBG] osdmap e109: 8 total, 8 up, 8 in 2026-03-10T14:54:44.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:44 vm00 bash[28403]: cluster 2026-03-10T14:54:43.575973+0000 mon.a (mon.0) 937 : cluster [DBG] osdmap e108: 8 total, 8 up, 8 in 2026-03-10T14:54:44.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:44 vm00 bash[28403]: cluster 2026-03-10T14:54:43.575973+0000 mon.a (mon.0) 937 : cluster [DBG] osdmap e108: 8 total, 8 up, 8 in 2026-03-10T14:54:44.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:44 vm00 bash[28403]: audit 2026-03-10T14:54:43.596548+0000 mon.c (mon.2) 32 : audit [INF] from='client.? 192.168.123.100:0/2762836065' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:44.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:44 vm00 bash[28403]: audit 2026-03-10T14:54:43.596548+0000 mon.c (mon.2) 32 : audit [INF] from='client.? 
192.168.123.100:0/2762836065' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:44.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:44 vm00 bash[28403]: audit 2026-03-10T14:54:43.603299+0000 mon.a (mon.0) 938 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:44.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:44 vm00 bash[28403]: audit 2026-03-10T14:54:43.603299+0000 mon.a (mon.0) 938 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:44.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:44 vm00 bash[28403]: audit 2026-03-10T14:54:44.548630+0000 mon.a (mon.0) 939 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:54:44.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:44 vm00 bash[28403]: audit 2026-03-10T14:54:44.548630+0000 mon.a (mon.0) 939 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:54:44.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:44 vm00 bash[28403]: cluster 2026-03-10T14:54:44.553460+0000 mon.a (mon.0) 940 : cluster [DBG] osdmap e109: 8 total, 8 up, 8 in 2026-03-10T14:54:44.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:44 vm00 bash[28403]: cluster 2026-03-10T14:54:44.553460+0000 mon.a (mon.0) 940 : cluster [DBG] osdmap e109: 8 total, 8 up, 8 in 2026-03-10T14:54:45.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:44 vm03 bash[23394]: cluster 2026-03-10T14:54:43.575973+0000 mon.a (mon.0) 937 : cluster [DBG] osdmap e108: 8 total, 8 up, 8 in 2026-03-10T14:54:45.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:44 vm03 bash[23394]: cluster 2026-03-10T14:54:43.575973+0000 mon.a (mon.0) 937 : cluster [DBG] osdmap e108: 8 total, 8 up, 8 in 2026-03-10T14:54:45.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:44 vm03 bash[23394]: audit 2026-03-10T14:54:43.596548+0000 mon.c (mon.2) 32 : audit [INF] from='client.? 192.168.123.100:0/2762836065' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:45.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:44 vm03 bash[23394]: audit 2026-03-10T14:54:43.596548+0000 mon.c (mon.2) 32 : audit [INF] from='client.? 192.168.123.100:0/2762836065' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:45.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:44 vm03 bash[23394]: audit 2026-03-10T14:54:43.603299+0000 mon.a (mon.0) 938 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:45.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:44 vm03 bash[23394]: audit 2026-03-10T14:54:43.603299+0000 mon.a (mon.0) 938 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:45.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:44 vm03 bash[23394]: audit 2026-03-10T14:54:44.548630+0000 mon.a (mon.0) 939 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:54:45.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:44 vm03 bash[23394]: audit 2026-03-10T14:54:44.548630+0000 mon.a (mon.0) 939 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:54:45.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:44 vm03 bash[23394]: cluster 2026-03-10T14:54:44.553460+0000 mon.a (mon.0) 940 : cluster [DBG] osdmap e109: 8 total, 8 up, 8 in 2026-03-10T14:54:45.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:44 vm03 bash[23394]: cluster 2026-03-10T14:54:44.553460+0000 mon.a (mon.0) 940 : cluster [DBG] osdmap e109: 8 total, 8 up, 8 in 2026-03-10T14:54:45.560 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_write_full PASSED [ 26%] 2026-03-10T14:54:45.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:45 vm00 bash[28403]: cluster 2026-03-10T14:54:44.256746+0000 mgr.y (mgr.24425) 120 : cluster [DBG] pgmap v112: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 284 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:54:45.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:45 vm00 bash[28403]: cluster 2026-03-10T14:54:44.256746+0000 mgr.y (mgr.24425) 120 : cluster [DBG] pgmap v112: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 284 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:54:45.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:45 vm00 bash[28403]: cluster 2026-03-10T14:54:45.555325+0000 mon.a (mon.0) 941 : cluster [DBG] osdmap e110: 8 total, 8 up, 8 in 2026-03-10T14:54:45.965 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:45 vm00 bash[28403]: cluster 2026-03-10T14:54:45.555325+0000 mon.a (mon.0) 941 : cluster [DBG] osdmap e110: 8 total, 8 up, 8 in 2026-03-10T14:54:45.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:45 vm00 bash[20726]: cluster 2026-03-10T14:54:44.256746+0000 mgr.y (mgr.24425) 120 : cluster [DBG] pgmap v112: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 284 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:54:45.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:45 vm00 bash[20726]: cluster 2026-03-10T14:54:44.256746+0000 mgr.y (mgr.24425) 120 : cluster [DBG] pgmap v112: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 284 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:54:45.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:45 vm00 bash[20726]: cluster 2026-03-10T14:54:45.555325+0000 mon.a (mon.0) 941 : cluster [DBG] osdmap e110: 8 total, 8 up, 8 in 2026-03-10T14:54:45.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:45 vm00 bash[20726]: cluster 2026-03-10T14:54:45.555325+0000 mon.a (mon.0) 941 : cluster [DBG] osdmap e110: 8 total, 8 up, 8 in 2026-03-10T14:54:46.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:45 vm03 bash[23394]: cluster 2026-03-10T14:54:44.256746+0000 mgr.y (mgr.24425) 120 : cluster [DBG] pgmap v112: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 284 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:54:46.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:45 vm03 bash[23394]: cluster 2026-03-10T14:54:44.256746+0000 mgr.y (mgr.24425) 120 : cluster [DBG] pgmap v112: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 284 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:54:46.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:45 vm03 bash[23394]: cluster 2026-03-10T14:54:45.555325+0000 mon.a (mon.0) 941 : cluster [DBG] osdmap e110: 8 total, 8 up, 8 in 2026-03-10T14:54:46.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:45 vm03 
bash[23394]: cluster 2026-03-10T14:54:45.555325+0000 mon.a (mon.0) 941 : cluster [DBG] osdmap e110: 8 total, 8 up, 8 in 2026-03-10T14:54:47.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:47 vm03 bash[23394]: cluster 2026-03-10T14:54:46.257129+0000 mgr.y (mgr.24425) 121 : cluster [DBG] pgmap v115: 164 pgs: 164 active+clean; 455 KiB data, 319 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:47.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:47 vm03 bash[23394]: cluster 2026-03-10T14:54:46.257129+0000 mgr.y (mgr.24425) 121 : cluster [DBG] pgmap v115: 164 pgs: 164 active+clean; 455 KiB data, 319 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:47.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:47 vm03 bash[23394]: cluster 2026-03-10T14:54:46.600764+0000 mon.a (mon.0) 942 : cluster [DBG] osdmap e111: 8 total, 8 up, 8 in 2026-03-10T14:54:47.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:47 vm03 bash[23394]: cluster 2026-03-10T14:54:46.600764+0000 mon.a (mon.0) 942 : cluster [DBG] osdmap e111: 8 total, 8 up, 8 in 2026-03-10T14:54:47.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:47 vm00 bash[28403]: cluster 2026-03-10T14:54:46.257129+0000 mgr.y (mgr.24425) 121 : cluster [DBG] pgmap v115: 164 pgs: 164 active+clean; 455 KiB data, 319 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:47.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:47 vm00 bash[28403]: cluster 2026-03-10T14:54:46.257129+0000 mgr.y (mgr.24425) 121 : cluster [DBG] pgmap v115: 164 pgs: 164 active+clean; 455 KiB data, 319 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:47.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:47 vm00 bash[28403]: cluster 2026-03-10T14:54:46.600764+0000 mon.a (mon.0) 942 : cluster [DBG] osdmap e111: 8 total, 8 up, 8 in 2026-03-10T14:54:47.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:47 vm00 bash[28403]: cluster 
2026-03-10T14:54:46.600764+0000 mon.a (mon.0) 942 : cluster [DBG] osdmap e111: 8 total, 8 up, 8 in 2026-03-10T14:54:47.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:47 vm00 bash[20726]: cluster 2026-03-10T14:54:46.257129+0000 mgr.y (mgr.24425) 121 : cluster [DBG] pgmap v115: 164 pgs: 164 active+clean; 455 KiB data, 319 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:47.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:47 vm00 bash[20726]: cluster 2026-03-10T14:54:46.257129+0000 mgr.y (mgr.24425) 121 : cluster [DBG] pgmap v115: 164 pgs: 164 active+clean; 455 KiB data, 319 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:47.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:47 vm00 bash[20726]: cluster 2026-03-10T14:54:46.600764+0000 mon.a (mon.0) 942 : cluster [DBG] osdmap e111: 8 total, 8 up, 8 in 2026-03-10T14:54:47.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:47 vm00 bash[20726]: cluster 2026-03-10T14:54:46.600764+0000 mon.a (mon.0) 942 : cluster [DBG] osdmap e111: 8 total, 8 up, 8 in 2026-03-10T14:54:48.875 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 14:54:48 vm03 bash[48459]: debug there is no tcmu-runner data available 2026-03-10T14:54:48.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:48 vm03 bash[23394]: cluster 2026-03-10T14:54:47.592385+0000 mon.a (mon.0) 943 : cluster [DBG] osdmap e112: 8 total, 8 up, 8 in 2026-03-10T14:54:48.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:48 vm03 bash[23394]: cluster 2026-03-10T14:54:47.592385+0000 mon.a (mon.0) 943 : cluster [DBG] osdmap e112: 8 total, 8 up, 8 in 2026-03-10T14:54:48.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:48 vm03 bash[23394]: audit 2026-03-10T14:54:47.639120+0000 mon.c (mon.2) 33 : audit [INF] from='client.? 
192.168.123.100:0/200629478' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:48.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:48 vm03 bash[23394]: audit 2026-03-10T14:54:47.639120+0000 mon.c (mon.2) 33 : audit [INF] from='client.? 192.168.123.100:0/200629478' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:48.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:48 vm03 bash[23394]: audit 2026-03-10T14:54:47.639580+0000 mon.a (mon.0) 944 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:48.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:48 vm03 bash[23394]: audit 2026-03-10T14:54:47.639580+0000 mon.a (mon.0) 944 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:48.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:48 vm00 bash[20726]: cluster 2026-03-10T14:54:47.592385+0000 mon.a (mon.0) 943 : cluster [DBG] osdmap e112: 8 total, 8 up, 8 in 2026-03-10T14:54:48.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:48 vm00 bash[20726]: cluster 2026-03-10T14:54:47.592385+0000 mon.a (mon.0) 943 : cluster [DBG] osdmap e112: 8 total, 8 up, 8 in 2026-03-10T14:54:48.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:48 vm00 bash[20726]: audit 2026-03-10T14:54:47.639120+0000 mon.c (mon.2) 33 : audit [INF] from='client.? 192.168.123.100:0/200629478' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:48.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:48 vm00 bash[20726]: audit 2026-03-10T14:54:47.639120+0000 mon.c (mon.2) 33 : audit [INF] from='client.? 
192.168.123.100:0/200629478' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:48.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:48 vm00 bash[20726]: audit 2026-03-10T14:54:47.639580+0000 mon.a (mon.0) 944 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:48.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:48 vm00 bash[20726]: audit 2026-03-10T14:54:47.639580+0000 mon.a (mon.0) 944 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:48.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:48 vm00 bash[28403]: cluster 2026-03-10T14:54:47.592385+0000 mon.a (mon.0) 943 : cluster [DBG] osdmap e112: 8 total, 8 up, 8 in 2026-03-10T14:54:48.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:48 vm00 bash[28403]: cluster 2026-03-10T14:54:47.592385+0000 mon.a (mon.0) 943 : cluster [DBG] osdmap e112: 8 total, 8 up, 8 in 2026-03-10T14:54:48.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:48 vm00 bash[28403]: audit 2026-03-10T14:54:47.639120+0000 mon.c (mon.2) 33 : audit [INF] from='client.? 192.168.123.100:0/200629478' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:48.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:48 vm00 bash[28403]: audit 2026-03-10T14:54:47.639120+0000 mon.c (mon.2) 33 : audit [INF] from='client.? 192.168.123.100:0/200629478' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:48.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:48 vm00 bash[28403]: audit 2026-03-10T14:54:47.639580+0000 mon.a (mon.0) 944 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:48.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:48 vm00 bash[28403]: audit 2026-03-10T14:54:47.639580+0000 mon.a (mon.0) 944 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:49.698 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_writesame PASSED [ 27%] 2026-03-10T14:54:49.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:49 vm00 bash[28403]: cluster 2026-03-10T14:54:48.257499+0000 mgr.y (mgr.24425) 122 : cluster [DBG] pgmap v118: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 319 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:49.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:49 vm00 bash[28403]: cluster 2026-03-10T14:54:48.257499+0000 mgr.y (mgr.24425) 122 : cluster [DBG] pgmap v118: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 319 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:49.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:49 vm00 bash[28403]: audit 2026-03-10T14:54:48.597364+0000 mgr.y (mgr.24425) 123 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:54:49.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:49 vm00 bash[28403]: audit 2026-03-10T14:54:48.597364+0000 mgr.y (mgr.24425) 123 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:54:49.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:49 vm00 bash[28403]: cluster 2026-03-10T14:54:48.605764+0000 mon.a (mon.0) 945 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:54:49.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 
10 14:54:49 vm00 bash[28403]: cluster 2026-03-10T14:54:48.605764+0000 mon.a (mon.0) 945 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:54:49.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:49 vm00 bash[28403]: audit 2026-03-10T14:54:48.618919+0000 mon.a (mon.0) 946 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:54:49.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:49 vm00 bash[28403]: audit 2026-03-10T14:54:48.618919+0000 mon.a (mon.0) 946 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:54:49.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:49 vm00 bash[28403]: cluster 2026-03-10T14:54:48.630692+0000 mon.a (mon.0) 947 : cluster [DBG] osdmap e113: 8 total, 8 up, 8 in 2026-03-10T14:54:49.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:49 vm00 bash[28403]: cluster 2026-03-10T14:54:48.630692+0000 mon.a (mon.0) 947 : cluster [DBG] osdmap e113: 8 total, 8 up, 8 in 2026-03-10T14:54:49.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:49 vm00 bash[20726]: cluster 2026-03-10T14:54:48.257499+0000 mgr.y (mgr.24425) 122 : cluster [DBG] pgmap v118: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 319 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:49.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:49 vm00 bash[20726]: cluster 2026-03-10T14:54:48.257499+0000 mgr.y (mgr.24425) 122 : cluster [DBG] pgmap v118: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 319 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:49.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:49 vm00 bash[20726]: audit 2026-03-10T14:54:48.597364+0000 mgr.y (mgr.24425) 123 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": 
"json"}]: dispatch 2026-03-10T14:54:49.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:49 vm00 bash[20726]: audit 2026-03-10T14:54:48.597364+0000 mgr.y (mgr.24425) 123 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:54:49.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:49 vm00 bash[20726]: cluster 2026-03-10T14:54:48.605764+0000 mon.a (mon.0) 945 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:54:49.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:49 vm00 bash[20726]: cluster 2026-03-10T14:54:48.605764+0000 mon.a (mon.0) 945 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:54:49.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:49 vm00 bash[20726]: audit 2026-03-10T14:54:48.618919+0000 mon.a (mon.0) 946 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:54:49.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:49 vm00 bash[20726]: audit 2026-03-10T14:54:48.618919+0000 mon.a (mon.0) 946 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:54:49.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:49 vm00 bash[20726]: cluster 2026-03-10T14:54:48.630692+0000 mon.a (mon.0) 947 : cluster [DBG] osdmap e113: 8 total, 8 up, 8 in 2026-03-10T14:54:49.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:49 vm00 bash[20726]: cluster 2026-03-10T14:54:48.630692+0000 mon.a (mon.0) 947 : cluster [DBG] osdmap e113: 8 total, 8 up, 8 in 2026-03-10T14:54:50.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:49 vm03 bash[23394]: cluster 2026-03-10T14:54:48.257499+0000 mgr.y (mgr.24425) 122 : cluster [DBG] pgmap v118: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 319 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:50.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:49 vm03 bash[23394]: cluster 2026-03-10T14:54:48.257499+0000 mgr.y (mgr.24425) 122 : cluster [DBG] pgmap v118: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 319 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:50.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:49 vm03 bash[23394]: audit 2026-03-10T14:54:48.597364+0000 mgr.y (mgr.24425) 123 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:54:50.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:49 vm03 bash[23394]: audit 2026-03-10T14:54:48.597364+0000 mgr.y (mgr.24425) 123 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:54:50.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:49 vm03 bash[23394]: cluster 2026-03-10T14:54:48.605764+0000 mon.a (mon.0) 945 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:54:50.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:49 
vm03 bash[23394]: cluster 2026-03-10T14:54:48.605764+0000 mon.a (mon.0) 945 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:54:50.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:49 vm03 bash[23394]: audit 2026-03-10T14:54:48.618919+0000 mon.a (mon.0) 946 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:54:50.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:49 vm03 bash[23394]: audit 2026-03-10T14:54:48.618919+0000 mon.a (mon.0) 946 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:54:50.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:49 vm03 bash[23394]: cluster 2026-03-10T14:54:48.630692+0000 mon.a (mon.0) 947 : cluster [DBG] osdmap e113: 8 total, 8 up, 8 in 2026-03-10T14:54:50.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:49 vm03 bash[23394]: cluster 2026-03-10T14:54:48.630692+0000 mon.a (mon.0) 947 : cluster [DBG] osdmap e113: 8 total, 8 up, 8 in 2026-03-10T14:54:51.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:50 vm03 bash[23394]: cluster 2026-03-10T14:54:49.697119+0000 mon.a (mon.0) 948 : cluster [DBG] osdmap e114: 8 total, 8 up, 8 in 2026-03-10T14:54:51.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:50 vm03 bash[23394]: cluster 2026-03-10T14:54:49.697119+0000 mon.a (mon.0) 948 : cluster [DBG] osdmap e114: 8 total, 8 up, 8 in 2026-03-10T14:54:51.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:50 vm00 bash[28403]: cluster 2026-03-10T14:54:49.697119+0000 mon.a (mon.0) 948 : cluster [DBG] osdmap e114: 8 total, 8 up, 8 in 2026-03-10T14:54:51.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:50 vm00 bash[28403]: cluster 2026-03-10T14:54:49.697119+0000 mon.a (mon.0) 948 : cluster [DBG] osdmap e114: 8 total, 8 up, 8 in 2026-03-10T14:54:51.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 
10 14:54:50 vm00 bash[20726]: cluster 2026-03-10T14:54:49.697119+0000 mon.a (mon.0) 948 : cluster [DBG] osdmap e114: 8 total, 8 up, 8 in 2026-03-10T14:54:51.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:50 vm00 bash[20726]: cluster 2026-03-10T14:54:49.697119+0000 mon.a (mon.0) 948 : cluster [DBG] osdmap e114: 8 total, 8 up, 8 in 2026-03-10T14:54:52.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:51 vm03 bash[23394]: cluster 2026-03-10T14:54:50.257991+0000 mgr.y (mgr.24425) 124 : cluster [DBG] pgmap v121: 164 pgs: 164 active+clean; 455 KiB data, 355 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:52.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:51 vm03 bash[23394]: cluster 2026-03-10T14:54:50.257991+0000 mgr.y (mgr.24425) 124 : cluster [DBG] pgmap v121: 164 pgs: 164 active+clean; 455 KiB data, 355 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:52.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:51 vm03 bash[23394]: cluster 2026-03-10T14:54:50.743138+0000 mon.a (mon.0) 949 : cluster [DBG] osdmap e115: 8 total, 8 up, 8 in 2026-03-10T14:54:52.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:51 vm03 bash[23394]: cluster 2026-03-10T14:54:50.743138+0000 mon.a (mon.0) 949 : cluster [DBG] osdmap e115: 8 total, 8 up, 8 in 2026-03-10T14:54:52.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:51 vm00 bash[20726]: cluster 2026-03-10T14:54:50.257991+0000 mgr.y (mgr.24425) 124 : cluster [DBG] pgmap v121: 164 pgs: 164 active+clean; 455 KiB data, 355 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:52.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:51 vm00 bash[20726]: cluster 2026-03-10T14:54:50.257991+0000 mgr.y (mgr.24425) 124 : cluster [DBG] pgmap v121: 164 pgs: 164 active+clean; 455 KiB data, 355 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:52.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:51 vm00 bash[20726]: 
cluster 2026-03-10T14:54:50.743138+0000 mon.a (mon.0) 949 : cluster [DBG] osdmap e115: 8 total, 8 up, 8 in 2026-03-10T14:54:52.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:51 vm00 bash[20726]: cluster 2026-03-10T14:54:50.743138+0000 mon.a (mon.0) 949 : cluster [DBG] osdmap e115: 8 total, 8 up, 8 in 2026-03-10T14:54:52.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:51 vm00 bash[28403]: cluster 2026-03-10T14:54:50.257991+0000 mgr.y (mgr.24425) 124 : cluster [DBG] pgmap v121: 164 pgs: 164 active+clean; 455 KiB data, 355 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:52.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:51 vm00 bash[28403]: cluster 2026-03-10T14:54:50.257991+0000 mgr.y (mgr.24425) 124 : cluster [DBG] pgmap v121: 164 pgs: 164 active+clean; 455 KiB data, 355 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:52.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:51 vm00 bash[28403]: cluster 2026-03-10T14:54:50.743138+0000 mon.a (mon.0) 949 : cluster [DBG] osdmap e115: 8 total, 8 up, 8 in 2026-03-10T14:54:52.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:51 vm00 bash[28403]: cluster 2026-03-10T14:54:50.743138+0000 mon.a (mon.0) 949 : cluster [DBG] osdmap e115: 8 total, 8 up, 8 in 2026-03-10T14:54:53.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:52 vm03 bash[23394]: cluster 2026-03-10T14:54:51.741398+0000 mon.a (mon.0) 950 : cluster [DBG] osdmap e116: 8 total, 8 up, 8 in 2026-03-10T14:54:53.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:52 vm03 bash[23394]: cluster 2026-03-10T14:54:51.741398+0000 mon.a (mon.0) 950 : cluster [DBG] osdmap e116: 8 total, 8 up, 8 in 2026-03-10T14:54:53.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:52 vm03 bash[23394]: audit 2026-03-10T14:54:51.780319+0000 mon.c (mon.2) 34 : audit [INF] from='client.? 
192.168.123.100:0/3660859028' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:53.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:52 vm03 bash[23394]: audit 2026-03-10T14:54:51.780319+0000 mon.c (mon.2) 34 : audit [INF] from='client.? 192.168.123.100:0/3660859028' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:53.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:52 vm03 bash[23394]: audit 2026-03-10T14:54:51.780805+0000 mon.a (mon.0) 951 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:53.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:52 vm03 bash[23394]: audit 2026-03-10T14:54:51.780805+0000 mon.a (mon.0) 951 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:53.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:52 vm03 bash[23394]: audit 2026-03-10T14:54:52.731046+0000 mon.a (mon.0) 952 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:54:53.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:52 vm03 bash[23394]: audit 2026-03-10T14:54:52.731046+0000 mon.a (mon.0) 952 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:54:53.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:52 vm03 bash[23394]: cluster 2026-03-10T14:54:52.745131+0000 mon.a (mon.0) 953 : cluster [DBG] osdmap e117: 8 total, 8 up, 8 in 2026-03-10T14:54:53.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:52 vm03 bash[23394]: cluster 2026-03-10T14:54:52.745131+0000 mon.a (mon.0) 953 : cluster [DBG] osdmap e117: 8 total, 8 up, 8 in 2026-03-10T14:54:53.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:52 vm00 bash[20726]: cluster 2026-03-10T14:54:51.741398+0000 mon.a (mon.0) 950 : cluster [DBG] osdmap e116: 8 total, 8 up, 8 in 2026-03-10T14:54:53.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:52 vm00 bash[20726]: cluster 2026-03-10T14:54:51.741398+0000 mon.a (mon.0) 950 : cluster [DBG] osdmap e116: 8 total, 8 up, 8 in 2026-03-10T14:54:53.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:52 vm00 bash[20726]: audit 2026-03-10T14:54:51.780319+0000 mon.c (mon.2) 34 : audit [INF] from='client.? 192.168.123.100:0/3660859028' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:53.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:52 vm00 bash[20726]: audit 2026-03-10T14:54:51.780319+0000 mon.c (mon.2) 34 : audit [INF] from='client.? 192.168.123.100:0/3660859028' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:53.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:52 vm00 bash[20726]: audit 2026-03-10T14:54:51.780805+0000 mon.a (mon.0) 951 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:53.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:52 vm00 bash[20726]: audit 2026-03-10T14:54:51.780805+0000 mon.a (mon.0) 951 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:53.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:52 vm00 bash[20726]: audit 2026-03-10T14:54:52.731046+0000 mon.a (mon.0) 952 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:54:53.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:52 vm00 bash[20726]: audit 2026-03-10T14:54:52.731046+0000 mon.a (mon.0) 952 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:54:53.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:52 vm00 bash[20726]: cluster 2026-03-10T14:54:52.745131+0000 mon.a (mon.0) 953 : cluster [DBG] osdmap e117: 8 total, 8 up, 8 in 2026-03-10T14:54:53.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:52 vm00 bash[20726]: cluster 2026-03-10T14:54:52.745131+0000 mon.a (mon.0) 953 : cluster [DBG] osdmap e117: 8 total, 8 up, 8 in 2026-03-10T14:54:53.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:52 vm00 bash[28403]: cluster 2026-03-10T14:54:51.741398+0000 mon.a (mon.0) 950 : cluster [DBG] osdmap e116: 8 total, 8 up, 8 in 2026-03-10T14:54:53.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:52 vm00 bash[28403]: cluster 2026-03-10T14:54:51.741398+0000 mon.a (mon.0) 950 : cluster [DBG] osdmap e116: 8 total, 8 up, 8 in 2026-03-10T14:54:53.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:52 vm00 bash[28403]: audit 2026-03-10T14:54:51.780319+0000 mon.c (mon.2) 34 : audit [INF] from='client.? 192.168.123.100:0/3660859028' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:53.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:52 vm00 bash[28403]: audit 2026-03-10T14:54:51.780319+0000 mon.c (mon.2) 34 : audit [INF] from='client.? 
192.168.123.100:0/3660859028' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:53.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:52 vm00 bash[28403]: audit 2026-03-10T14:54:51.780805+0000 mon.a (mon.0) 951 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:53.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:52 vm00 bash[28403]: audit 2026-03-10T14:54:51.780805+0000 mon.a (mon.0) 951 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:53.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:52 vm00 bash[28403]: audit 2026-03-10T14:54:52.731046+0000 mon.a (mon.0) 952 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:54:53.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:52 vm00 bash[28403]: audit 2026-03-10T14:54:52.731046+0000 mon.a (mon.0) 952 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:54:53.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:52 vm00 bash[28403]: cluster 2026-03-10T14:54:52.745131+0000 mon.a (mon.0) 953 : cluster [DBG] osdmap e117: 8 total, 8 up, 8 in 2026-03-10T14:54:53.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:52 vm00 bash[28403]: cluster 2026-03-10T14:54:52.745131+0000 mon.a (mon.0) 953 : cluster [DBG] osdmap e117: 8 total, 8 up, 8 in 2026-03-10T14:54:53.752 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_append PASSED [ 28%] 2026-03-10T14:54:54.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:53 vm03 bash[23394]: cluster 2026-03-10T14:54:52.258311+0000 mgr.y (mgr.24425) 125 : cluster [DBG] pgmap v124: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 355 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:54.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:53 vm03 bash[23394]: cluster 2026-03-10T14:54:52.258311+0000 mgr.y (mgr.24425) 125 : cluster [DBG] pgmap v124: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 355 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:54.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:53 vm03 bash[23394]: cluster 2026-03-10T14:54:53.753383+0000 mon.a (mon.0) 954 : cluster [DBG] osdmap e118: 8 total, 8 up, 8 in 2026-03-10T14:54:54.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:53 vm03 bash[23394]: cluster 2026-03-10T14:54:53.753383+0000 mon.a (mon.0) 954 : cluster [DBG] osdmap e118: 8 total, 8 up, 8 in 2026-03-10T14:54:54.215 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:54:53 vm00 bash[21005]: ::ffff:192.168.123.103 - - [10/Mar/2026:14:54:53] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:54:54.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:53 vm00 bash[28403]: cluster 2026-03-10T14:54:52.258311+0000 mgr.y 
(mgr.24425) 125 : cluster [DBG] pgmap v124: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 355 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:54.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:53 vm00 bash[28403]: cluster 2026-03-10T14:54:52.258311+0000 mgr.y (mgr.24425) 125 : cluster [DBG] pgmap v124: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 355 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:54.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:53 vm00 bash[28403]: cluster 2026-03-10T14:54:53.753383+0000 mon.a (mon.0) 954 : cluster [DBG] osdmap e118: 8 total, 8 up, 8 in 2026-03-10T14:54:54.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:53 vm00 bash[28403]: cluster 2026-03-10T14:54:53.753383+0000 mon.a (mon.0) 954 : cluster [DBG] osdmap e118: 8 total, 8 up, 8 in 2026-03-10T14:54:54.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:53 vm00 bash[20726]: cluster 2026-03-10T14:54:52.258311+0000 mgr.y (mgr.24425) 125 : cluster [DBG] pgmap v124: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 355 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:54.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:53 vm00 bash[20726]: cluster 2026-03-10T14:54:52.258311+0000 mgr.y (mgr.24425) 125 : cluster [DBG] pgmap v124: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 355 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:54.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:53 vm00 bash[20726]: cluster 2026-03-10T14:54:53.753383+0000 mon.a (mon.0) 954 : cluster [DBG] osdmap e118: 8 total, 8 up, 8 in 2026-03-10T14:54:54.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:53 vm00 bash[20726]: cluster 2026-03-10T14:54:53.753383+0000 mon.a (mon.0) 954 : cluster [DBG] osdmap e118: 8 total, 8 up, 8 in 2026-03-10T14:54:55.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:54 vm03 bash[23394]: cluster 
2026-03-10T14:54:54.258627+0000 mgr.y (mgr.24425) 126 : cluster [DBG] pgmap v127: 164 pgs: 164 active+clean; 455 KiB data, 355 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:54:55.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:54 vm03 bash[23394]: cluster 2026-03-10T14:54:54.258627+0000 mgr.y (mgr.24425) 126 : cluster [DBG] pgmap v127: 164 pgs: 164 active+clean; 455 KiB data, 355 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:54:55.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:54 vm03 bash[23394]: audit 2026-03-10T14:54:54.390073+0000 mon.a (mon.0) 955 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:54:55.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:54 vm03 bash[23394]: audit 2026-03-10T14:54:54.390073+0000 mon.a (mon.0) 955 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:54:55.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:54 vm03 bash[23394]: cluster 2026-03-10T14:54:54.759389+0000 mon.a (mon.0) 956 : cluster [DBG] osdmap e119: 8 total, 8 up, 8 in 2026-03-10T14:54:55.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:54 vm03 bash[23394]: cluster 2026-03-10T14:54:54.759389+0000 mon.a (mon.0) 956 : cluster [DBG] osdmap e119: 8 total, 8 up, 8 in 2026-03-10T14:54:55.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:54 vm00 bash[28403]: cluster 2026-03-10T14:54:54.258627+0000 mgr.y (mgr.24425) 126 : cluster [DBG] pgmap v127: 164 pgs: 164 active+clean; 455 KiB data, 355 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:54:55.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:54 vm00 bash[28403]: cluster 2026-03-10T14:54:54.258627+0000 mgr.y (mgr.24425) 126 : cluster [DBG] pgmap v127: 164 pgs: 164 active+clean; 455 KiB data, 355 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:54:55.215 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:54 vm00 bash[28403]: audit 2026-03-10T14:54:54.390073+0000 mon.a (mon.0) 955 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:54:55.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:54 vm00 bash[28403]: audit 2026-03-10T14:54:54.390073+0000 mon.a (mon.0) 955 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:54:55.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:54 vm00 bash[28403]: cluster 2026-03-10T14:54:54.759389+0000 mon.a (mon.0) 956 : cluster [DBG] osdmap e119: 8 total, 8 up, 8 in 2026-03-10T14:54:55.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:54 vm00 bash[28403]: cluster 2026-03-10T14:54:54.759389+0000 mon.a (mon.0) 956 : cluster [DBG] osdmap e119: 8 total, 8 up, 8 in 2026-03-10T14:54:55.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:54 vm00 bash[20726]: cluster 2026-03-10T14:54:54.258627+0000 mgr.y (mgr.24425) 126 : cluster [DBG] pgmap v127: 164 pgs: 164 active+clean; 455 KiB data, 355 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:54:55.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:54 vm00 bash[20726]: cluster 2026-03-10T14:54:54.258627+0000 mgr.y (mgr.24425) 126 : cluster [DBG] pgmap v127: 164 pgs: 164 active+clean; 455 KiB data, 355 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:54:55.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:54 vm00 bash[20726]: audit 2026-03-10T14:54:54.390073+0000 mon.a (mon.0) 955 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:54:55.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:54 vm00 bash[20726]: audit 2026-03-10T14:54:54.390073+0000 mon.a (mon.0) 955 : audit [DBG] from='mgr.24425 
192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:54:55.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:54 vm00 bash[20726]: cluster 2026-03-10T14:54:54.759389+0000 mon.a (mon.0) 956 : cluster [DBG] osdmap e119: 8 total, 8 up, 8 in 2026-03-10T14:54:55.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:54 vm00 bash[20726]: cluster 2026-03-10T14:54:54.759389+0000 mon.a (mon.0) 956 : cluster [DBG] osdmap e119: 8 total, 8 up, 8 in 2026-03-10T14:54:56.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:55 vm03 bash[23394]: cluster 2026-03-10T14:54:54.777494+0000 mon.a (mon.0) 957 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:54:56.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:55 vm03 bash[23394]: cluster 2026-03-10T14:54:54.777494+0000 mon.a (mon.0) 957 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:54:56.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:55 vm03 bash[23394]: cluster 2026-03-10T14:54:55.761356+0000 mon.a (mon.0) 958 : cluster [DBG] osdmap e120: 8 total, 8 up, 8 in 2026-03-10T14:54:56.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:55 vm03 bash[23394]: cluster 2026-03-10T14:54:55.761356+0000 mon.a (mon.0) 958 : cluster [DBG] osdmap e120: 8 total, 8 up, 8 in 2026-03-10T14:54:56.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:55 vm00 bash[28403]: cluster 2026-03-10T14:54:54.777494+0000 mon.a (mon.0) 957 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:54:56.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:55 vm00 bash[28403]: cluster 2026-03-10T14:54:54.777494+0000 mon.a (mon.0) 957 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:54:56.215 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:55 vm00 bash[28403]: cluster 2026-03-10T14:54:55.761356+0000 mon.a (mon.0) 958 : cluster [DBG] osdmap e120: 8 total, 8 up, 8 in 2026-03-10T14:54:56.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:55 vm00 bash[28403]: cluster 2026-03-10T14:54:55.761356+0000 mon.a (mon.0) 958 : cluster [DBG] osdmap e120: 8 total, 8 up, 8 in 2026-03-10T14:54:56.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:55 vm00 bash[20726]: cluster 2026-03-10T14:54:54.777494+0000 mon.a (mon.0) 957 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:54:56.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:55 vm00 bash[20726]: cluster 2026-03-10T14:54:54.777494+0000 mon.a (mon.0) 957 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:54:56.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:55 vm00 bash[20726]: cluster 2026-03-10T14:54:55.761356+0000 mon.a (mon.0) 958 : cluster [DBG] osdmap e120: 8 total, 8 up, 8 in 2026-03-10T14:54:56.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:55 vm00 bash[20726]: cluster 2026-03-10T14:54:55.761356+0000 mon.a (mon.0) 958 : cluster [DBG] osdmap e120: 8 total, 8 up, 8 in 2026-03-10T14:54:57.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:56 vm03 bash[23394]: audit 2026-03-10T14:54:55.807268+0000 mon.b (mon.1) 31 : audit [INF] from='client.? 192.168.123.100:0/603550778' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:57.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:56 vm03 bash[23394]: audit 2026-03-10T14:54:55.807268+0000 mon.b (mon.1) 31 : audit [INF] from='client.? 
192.168.123.100:0/603550778' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:57.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:56 vm03 bash[23394]: audit 2026-03-10T14:54:55.818398+0000 mon.a (mon.0) 959 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:57.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:56 vm03 bash[23394]: audit 2026-03-10T14:54:55.818398+0000 mon.a (mon.0) 959 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:57.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:56 vm03 bash[23394]: cluster 2026-03-10T14:54:56.258944+0000 mgr.y (mgr.24425) 127 : cluster [DBG] pgmap v130: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 360 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:57.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:56 vm03 bash[23394]: cluster 2026-03-10T14:54:56.258944+0000 mgr.y (mgr.24425) 127 : cluster [DBG] pgmap v130: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 360 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:57.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:56 vm03 bash[23394]: audit 2026-03-10T14:54:56.745546+0000 mon.a (mon.0) 960 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:54:57.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:56 vm03 bash[23394]: audit 2026-03-10T14:54:56.745546+0000 mon.a (mon.0) 960 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:54:57.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:56 vm03 bash[23394]: cluster 2026-03-10T14:54:56.754114+0000 mon.a (mon.0) 961 : cluster [DBG] osdmap e121: 8 total, 8 up, 8 in 2026-03-10T14:54:57.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:56 vm03 bash[23394]: cluster 2026-03-10T14:54:56.754114+0000 mon.a (mon.0) 961 : cluster [DBG] osdmap e121: 8 total, 8 up, 8 in 2026-03-10T14:54:57.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:56 vm00 bash[28403]: audit 2026-03-10T14:54:55.807268+0000 mon.b (mon.1) 31 : audit [INF] from='client.? 192.168.123.100:0/603550778' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:57.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:56 vm00 bash[28403]: audit 2026-03-10T14:54:55.807268+0000 mon.b (mon.1) 31 : audit [INF] from='client.? 192.168.123.100:0/603550778' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:57.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:56 vm00 bash[28403]: audit 2026-03-10T14:54:55.818398+0000 mon.a (mon.0) 959 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:57.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:56 vm00 bash[28403]: audit 2026-03-10T14:54:55.818398+0000 mon.a (mon.0) 959 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:57.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:56 vm00 bash[28403]: cluster 2026-03-10T14:54:56.258944+0000 mgr.y (mgr.24425) 127 : cluster [DBG] pgmap v130: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 360 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:57.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:56 vm00 bash[28403]: cluster 2026-03-10T14:54:56.258944+0000 mgr.y (mgr.24425) 127 : cluster [DBG] pgmap v130: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 360 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:57.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:56 vm00 bash[28403]: audit 2026-03-10T14:54:56.745546+0000 mon.a (mon.0) 960 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:54:57.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:56 vm00 bash[28403]: audit 2026-03-10T14:54:56.745546+0000 mon.a (mon.0) 960 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:54:57.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:56 vm00 bash[28403]: cluster 2026-03-10T14:54:56.754114+0000 mon.a (mon.0) 961 : cluster [DBG] osdmap e121: 8 total, 8 up, 8 in 2026-03-10T14:54:57.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:56 vm00 bash[28403]: cluster 2026-03-10T14:54:56.754114+0000 mon.a (mon.0) 961 : cluster [DBG] osdmap e121: 8 total, 8 up, 8 in 2026-03-10T14:54:57.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:56 vm00 bash[20726]: audit 2026-03-10T14:54:55.807268+0000 mon.b (mon.1) 31 : audit [INF] from='client.? 
192.168.123.100:0/603550778' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:57.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:56 vm00 bash[20726]: audit 2026-03-10T14:54:55.807268+0000 mon.b (mon.1) 31 : audit [INF] from='client.? 192.168.123.100:0/603550778' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:57.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:56 vm00 bash[20726]: audit 2026-03-10T14:54:55.818398+0000 mon.a (mon.0) 959 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:57.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:56 vm00 bash[20726]: audit 2026-03-10T14:54:55.818398+0000 mon.a (mon.0) 959 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:54:57.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:56 vm00 bash[20726]: cluster 2026-03-10T14:54:56.258944+0000 mgr.y (mgr.24425) 127 : cluster [DBG] pgmap v130: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 360 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:57.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:56 vm00 bash[20726]: cluster 2026-03-10T14:54:56.258944+0000 mgr.y (mgr.24425) 127 : cluster [DBG] pgmap v130: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 360 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:57.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:56 vm00 bash[20726]: audit 2026-03-10T14:54:56.745546+0000 mon.a (mon.0) 960 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:54:57.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:56 vm00 bash[20726]: audit 2026-03-10T14:54:56.745546+0000 mon.a (mon.0) 960 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:54:57.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:56 vm00 bash[20726]: cluster 2026-03-10T14:54:56.754114+0000 mon.a (mon.0) 961 : cluster [DBG] osdmap e121: 8 total, 8 up, 8 in 2026-03-10T14:54:57.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:56 vm00 bash[20726]: cluster 2026-03-10T14:54:56.754114+0000 mon.a (mon.0) 961 : cluster [DBG] osdmap e121: 8 total, 8 up, 8 in 2026-03-10T14:54:58.058 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_write_zeros PASSED [ 29%] 2026-03-10T14:54:58.875 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 14:54:58 vm03 bash[48459]: debug there is no tcmu-runner data available 2026-03-10T14:54:59.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:59 vm00 bash[28403]: cluster 2026-03-10T14:54:58.057788+0000 mon.a (mon.0) 962 : cluster [DBG] osdmap e122: 8 total, 8 up, 8 in 2026-03-10T14:54:59.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:59 vm00 bash[28403]: cluster 2026-03-10T14:54:58.057788+0000 mon.a (mon.0) 962 : cluster [DBG] osdmap e122: 8 total, 8 up, 8 in 2026-03-10T14:54:59.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:59 vm00 bash[28403]: cluster 2026-03-10T14:54:58.259353+0000 mgr.y (mgr.24425) 128 : cluster [DBG] pgmap v133: 164 pgs: 164 active+clean; 455 KiB data, 360 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:59.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:59 vm00 bash[28403]: cluster 2026-03-10T14:54:58.259353+0000 mgr.y (mgr.24425) 128 : cluster [DBG] pgmap v133: 164 pgs: 164 active+clean; 455 KiB data, 360 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:59.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:59 vm00 bash[28403]: audit 2026-03-10T14:54:58.608290+0000 mgr.y (mgr.24425) 129 : audit [DBG] from='client.14514 -' 
entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:54:59.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:54:59 vm00 bash[28403]: audit 2026-03-10T14:54:58.608290+0000 mgr.y (mgr.24425) 129 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:54:59.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:59 vm00 bash[20726]: cluster 2026-03-10T14:54:58.057788+0000 mon.a (mon.0) 962 : cluster [DBG] osdmap e122: 8 total, 8 up, 8 in 2026-03-10T14:54:59.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:59 vm00 bash[20726]: cluster 2026-03-10T14:54:58.057788+0000 mon.a (mon.0) 962 : cluster [DBG] osdmap e122: 8 total, 8 up, 8 in 2026-03-10T14:54:59.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:59 vm00 bash[20726]: cluster 2026-03-10T14:54:58.259353+0000 mgr.y (mgr.24425) 128 : cluster [DBG] pgmap v133: 164 pgs: 164 active+clean; 455 KiB data, 360 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:59.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:59 vm00 bash[20726]: cluster 2026-03-10T14:54:58.259353+0000 mgr.y (mgr.24425) 128 : cluster [DBG] pgmap v133: 164 pgs: 164 active+clean; 455 KiB data, 360 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:59.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:59 vm00 bash[20726]: audit 2026-03-10T14:54:58.608290+0000 mgr.y (mgr.24425) 129 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:54:59.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:54:59 vm00 bash[20726]: audit 2026-03-10T14:54:58.608290+0000 mgr.y (mgr.24425) 129 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:54:59.625 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:59 vm03 bash[23394]: cluster 2026-03-10T14:54:58.057788+0000 mon.a (mon.0) 962 : cluster [DBG] osdmap e122: 8 total, 8 up, 8 in 2026-03-10T14:54:59.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:59 vm03 bash[23394]: cluster 2026-03-10T14:54:58.057788+0000 mon.a (mon.0) 962 : cluster [DBG] osdmap e122: 8 total, 8 up, 8 in 2026-03-10T14:54:59.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:59 vm03 bash[23394]: cluster 2026-03-10T14:54:58.259353+0000 mgr.y (mgr.24425) 128 : cluster [DBG] pgmap v133: 164 pgs: 164 active+clean; 455 KiB data, 360 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:59.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:59 vm03 bash[23394]: cluster 2026-03-10T14:54:58.259353+0000 mgr.y (mgr.24425) 128 : cluster [DBG] pgmap v133: 164 pgs: 164 active+clean; 455 KiB data, 360 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:54:59.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:59 vm03 bash[23394]: audit 2026-03-10T14:54:58.608290+0000 mgr.y (mgr.24425) 129 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:54:59.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:54:59 vm03 bash[23394]: audit 2026-03-10T14:54:58.608290+0000 mgr.y (mgr.24425) 129 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:55:00.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:00 vm00 bash[28403]: cluster 2026-03-10T14:54:59.215107+0000 mon.a (mon.0) 963 : cluster [DBG] osdmap e123: 8 total, 8 up, 8 in 2026-03-10T14:55:00.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:00 vm00 bash[28403]: cluster 2026-03-10T14:54:59.215107+0000 mon.a (mon.0) 963 : cluster [DBG] osdmap e123: 8 total, 8 up, 8 in 2026-03-10T14:55:00.465 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:00 vm00 bash[20726]: cluster 2026-03-10T14:54:59.215107+0000 mon.a (mon.0) 963 : cluster [DBG] osdmap e123: 8 total, 8 up, 8 in 2026-03-10T14:55:00.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:00 vm00 bash[20726]: cluster 2026-03-10T14:54:59.215107+0000 mon.a (mon.0) 963 : cluster [DBG] osdmap e123: 8 total, 8 up, 8 in 2026-03-10T14:55:00.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:00 vm03 bash[23394]: cluster 2026-03-10T14:54:59.215107+0000 mon.a (mon.0) 963 : cluster [DBG] osdmap e123: 8 total, 8 up, 8 in 2026-03-10T14:55:00.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:00 vm03 bash[23394]: cluster 2026-03-10T14:54:59.215107+0000 mon.a (mon.0) 963 : cluster [DBG] osdmap e123: 8 total, 8 up, 8 in 2026-03-10T14:55:01.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:01 vm00 bash[28403]: cluster 2026-03-10T14:55:00.222803+0000 mon.a (mon.0) 964 : cluster [DBG] osdmap e124: 8 total, 8 up, 8 in 2026-03-10T14:55:01.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:01 vm00 bash[28403]: cluster 2026-03-10T14:55:00.222803+0000 mon.a (mon.0) 964 : cluster [DBG] osdmap e124: 8 total, 8 up, 8 in 2026-03-10T14:55:01.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:01 vm00 bash[28403]: cluster 2026-03-10T14:55:00.259858+0000 mgr.y (mgr.24425) 130 : cluster [DBG] pgmap v136: 196 pgs: 11 creating+activating, 2 unknown, 183 active+clean; 455 KiB data, 360 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T14:55:01.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:01 vm00 bash[28403]: cluster 2026-03-10T14:55:00.259858+0000 mgr.y (mgr.24425) 130 : cluster [DBG] pgmap v136: 196 pgs: 11 creating+activating, 2 unknown, 183 active+clean; 455 KiB data, 360 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T14:55:01.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:01 vm00 bash[28403]: audit 
2026-03-10T14:55:00.262484+0000 mon.a (mon.0) 965 : audit [INF] from='client.? 192.168.123.100:0/3909810783' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:01.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:01 vm00 bash[28403]: audit 2026-03-10T14:55:00.262484+0000 mon.a (mon.0) 965 : audit [INF] from='client.? 192.168.123.100:0/3909810783' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:01.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:01 vm00 bash[20726]: cluster 2026-03-10T14:55:00.222803+0000 mon.a (mon.0) 964 : cluster [DBG] osdmap e124: 8 total, 8 up, 8 in 2026-03-10T14:55:01.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:01 vm00 bash[20726]: cluster 2026-03-10T14:55:00.222803+0000 mon.a (mon.0) 964 : cluster [DBG] osdmap e124: 8 total, 8 up, 8 in 2026-03-10T14:55:01.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:01 vm00 bash[20726]: cluster 2026-03-10T14:55:00.259858+0000 mgr.y (mgr.24425) 130 : cluster [DBG] pgmap v136: 196 pgs: 11 creating+activating, 2 unknown, 183 active+clean; 455 KiB data, 360 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T14:55:01.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:01 vm00 bash[20726]: cluster 2026-03-10T14:55:00.259858+0000 mgr.y (mgr.24425) 130 : cluster [DBG] pgmap v136: 196 pgs: 11 creating+activating, 2 unknown, 183 active+clean; 455 KiB data, 360 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T14:55:01.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:01 vm00 bash[20726]: audit 2026-03-10T14:55:00.262484+0000 mon.a (mon.0) 965 : audit [INF] from='client.? 
192.168.123.100:0/3909810783' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:01.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:01 vm00 bash[20726]: audit 2026-03-10T14:55:00.262484+0000 mon.a (mon.0) 965 : audit [INF] from='client.? 192.168.123.100:0/3909810783' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:01.785 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:01 vm03 bash[23394]: cluster 2026-03-10T14:55:00.222803+0000 mon.a (mon.0) 964 : cluster [DBG] osdmap e124: 8 total, 8 up, 8 in 2026-03-10T14:55:01.785 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:01 vm03 bash[23394]: cluster 2026-03-10T14:55:00.222803+0000 mon.a (mon.0) 964 : cluster [DBG] osdmap e124: 8 total, 8 up, 8 in 2026-03-10T14:55:01.786 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:01 vm03 bash[23394]: cluster 2026-03-10T14:55:00.259858+0000 mgr.y (mgr.24425) 130 : cluster [DBG] pgmap v136: 196 pgs: 11 creating+activating, 2 unknown, 183 active+clean; 455 KiB data, 360 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T14:55:01.786 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:01 vm03 bash[23394]: cluster 2026-03-10T14:55:00.259858+0000 mgr.y (mgr.24425) 130 : cluster [DBG] pgmap v136: 196 pgs: 11 creating+activating, 2 unknown, 183 active+clean; 455 KiB data, 360 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T14:55:01.786 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:01 vm03 bash[23394]: audit 2026-03-10T14:55:00.262484+0000 mon.a (mon.0) 965 : audit [INF] from='client.? 192.168.123.100:0/3909810783' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:01.786 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:01 vm03 bash[23394]: audit 2026-03-10T14:55:00.262484+0000 mon.a (mon.0) 965 : audit [INF] from='client.? 
192.168.123.100:0/3909810783' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:02.657 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_trunc PASSED [ 30%] 2026-03-10T14:55:03.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:02 vm03 bash[23394]: cluster 2026-03-10T14:55:01.468661+0000 mon.a (mon.0) 966 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:55:03.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:02 vm03 bash[23394]: cluster 2026-03-10T14:55:01.468661+0000 mon.a (mon.0) 966 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:55:03.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:02 vm03 bash[23394]: audit 2026-03-10T14:55:01.478281+0000 mon.a (mon.0) 967 : audit [INF] from='client.? 192.168.123.100:0/3909810783' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:55:03.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:02 vm03 bash[23394]: audit 2026-03-10T14:55:01.478281+0000 mon.a (mon.0) 967 : audit [INF] from='client.? 
192.168.123.100:0/3909810783' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:55:03.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:02 vm03 bash[23394]: cluster 2026-03-10T14:55:01.486104+0000 mon.a (mon.0) 968 : cluster [DBG] osdmap e125: 8 total, 8 up, 8 in 2026-03-10T14:55:03.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:02 vm03 bash[23394]: cluster 2026-03-10T14:55:01.486104+0000 mon.a (mon.0) 968 : cluster [DBG] osdmap e125: 8 total, 8 up, 8 in 2026-03-10T14:55:03.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:02 vm00 bash[20726]: cluster 2026-03-10T14:55:01.468661+0000 mon.a (mon.0) 966 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:55:03.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:02 vm00 bash[20726]: cluster 2026-03-10T14:55:01.468661+0000 mon.a (mon.0) 966 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:55:03.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:02 vm00 bash[20726]: audit 2026-03-10T14:55:01.478281+0000 mon.a (mon.0) 967 : audit [INF] from='client.? 192.168.123.100:0/3909810783' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:55:03.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:02 vm00 bash[20726]: audit 2026-03-10T14:55:01.478281+0000 mon.a (mon.0) 967 : audit [INF] from='client.? 
192.168.123.100:0/3909810783' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:55:03.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:02 vm00 bash[20726]: cluster 2026-03-10T14:55:01.486104+0000 mon.a (mon.0) 968 : cluster [DBG] osdmap e125: 8 total, 8 up, 8 in 2026-03-10T14:55:03.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:02 vm00 bash[20726]: cluster 2026-03-10T14:55:01.486104+0000 mon.a (mon.0) 968 : cluster [DBG] osdmap e125: 8 total, 8 up, 8 in 2026-03-10T14:55:03.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:02 vm00 bash[28403]: cluster 2026-03-10T14:55:01.468661+0000 mon.a (mon.0) 966 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:55:03.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:02 vm00 bash[28403]: cluster 2026-03-10T14:55:01.468661+0000 mon.a (mon.0) 966 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:55:03.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:02 vm00 bash[28403]: audit 2026-03-10T14:55:01.478281+0000 mon.a (mon.0) 967 : audit [INF] from='client.? 192.168.123.100:0/3909810783' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:55:03.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:02 vm00 bash[28403]: audit 2026-03-10T14:55:01.478281+0000 mon.a (mon.0) 967 : audit [INF] from='client.? 
192.168.123.100:0/3909810783' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:55:03.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:02 vm00 bash[28403]: cluster 2026-03-10T14:55:01.486104+0000 mon.a (mon.0) 968 : cluster [DBG] osdmap e125: 8 total, 8 up, 8 in 2026-03-10T14:55:03.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:02 vm00 bash[28403]: cluster 2026-03-10T14:55:01.486104+0000 mon.a (mon.0) 968 : cluster [DBG] osdmap e125: 8 total, 8 up, 8 in 2026-03-10T14:55:04.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:03 vm03 bash[23394]: cluster 2026-03-10T14:55:02.260200+0000 mgr.y (mgr.24425) 131 : cluster [DBG] pgmap v138: 196 pgs: 11 creating+activating, 185 active+clean; 455 KiB data, 360 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 243 B/s wr, 1 op/s 2026-03-10T14:55:04.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:03 vm03 bash[23394]: cluster 2026-03-10T14:55:02.260200+0000 mgr.y (mgr.24425) 131 : cluster [DBG] pgmap v138: 196 pgs: 11 creating+activating, 185 active+clean; 455 KiB data, 360 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 243 B/s wr, 1 op/s 2026-03-10T14:55:04.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:03 vm03 bash[23394]: cluster 2026-03-10T14:55:02.619806+0000 mon.a (mon.0) 969 : cluster [DBG] osdmap e126: 8 total, 8 up, 8 in 2026-03-10T14:55:04.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:03 vm03 bash[23394]: cluster 2026-03-10T14:55:02.619806+0000 mon.a (mon.0) 969 : cluster [DBG] osdmap e126: 8 total, 8 up, 8 in 2026-03-10T14:55:04.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:03 vm03 bash[23394]: cluster 2026-03-10T14:55:03.614891+0000 mon.a (mon.0) 970 : cluster [DBG] osdmap e127: 8 total, 8 up, 8 in 2026-03-10T14:55:04.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:03 vm03 bash[23394]: cluster 2026-03-10T14:55:03.614891+0000 mon.a (mon.0) 970 : cluster [DBG] osdmap e127: 8 total, 8 up, 8 in 
2026-03-10T14:55:04.215 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:55:03 vm00 bash[21005]: ::ffff:192.168.123.103 - - [10/Mar/2026:14:55:03] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:55:04.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:03 vm00 bash[28403]: cluster 2026-03-10T14:55:02.260200+0000 mgr.y (mgr.24425) 131 : cluster [DBG] pgmap v138: 196 pgs: 11 creating+activating, 185 active+clean; 455 KiB data, 360 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 243 B/s wr, 1 op/s 2026-03-10T14:55:04.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:03 vm00 bash[28403]: cluster 2026-03-10T14:55:02.260200+0000 mgr.y (mgr.24425) 131 : cluster [DBG] pgmap v138: 196 pgs: 11 creating+activating, 185 active+clean; 455 KiB data, 360 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 243 B/s wr, 1 op/s 2026-03-10T14:55:04.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:03 vm00 bash[28403]: cluster 2026-03-10T14:55:02.619806+0000 mon.a (mon.0) 969 : cluster [DBG] osdmap e126: 8 total, 8 up, 8 in 2026-03-10T14:55:04.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:03 vm00 bash[28403]: cluster 2026-03-10T14:55:02.619806+0000 mon.a (mon.0) 969 : cluster [DBG] osdmap e126: 8 total, 8 up, 8 in 2026-03-10T14:55:04.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:03 vm00 bash[28403]: cluster 2026-03-10T14:55:03.614891+0000 mon.a (mon.0) 970 : cluster [DBG] osdmap e127: 8 total, 8 up, 8 in 2026-03-10T14:55:04.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:03 vm00 bash[28403]: cluster 2026-03-10T14:55:03.614891+0000 mon.a (mon.0) 970 : cluster [DBG] osdmap e127: 8 total, 8 up, 8 in 2026-03-10T14:55:04.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:03 vm00 bash[20726]: cluster 2026-03-10T14:55:02.260200+0000 mgr.y (mgr.24425) 131 : cluster [DBG] pgmap v138: 196 pgs: 11 creating+activating, 185 active+clean; 455 KiB data, 360 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 243 B/s wr, 1 op/s 
2026-03-10T14:55:04.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:03 vm00 bash[20726]: cluster 2026-03-10T14:55:02.260200+0000 mgr.y (mgr.24425) 131 : cluster [DBG] pgmap v138: 196 pgs: 11 creating+activating, 185 active+clean; 455 KiB data, 360 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 243 B/s wr, 1 op/s 2026-03-10T14:55:04.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:03 vm00 bash[20726]: cluster 2026-03-10T14:55:02.619806+0000 mon.a (mon.0) 969 : cluster [DBG] osdmap e126: 8 total, 8 up, 8 in 2026-03-10T14:55:04.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:03 vm00 bash[20726]: cluster 2026-03-10T14:55:02.619806+0000 mon.a (mon.0) 969 : cluster [DBG] osdmap e126: 8 total, 8 up, 8 in 2026-03-10T14:55:04.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:03 vm00 bash[20726]: cluster 2026-03-10T14:55:03.614891+0000 mon.a (mon.0) 970 : cluster [DBG] osdmap e127: 8 total, 8 up, 8 in 2026-03-10T14:55:04.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:03 vm00 bash[20726]: cluster 2026-03-10T14:55:03.614891+0000 mon.a (mon.0) 970 : cluster [DBG] osdmap e127: 8 total, 8 up, 8 in 2026-03-10T14:55:05.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:05 vm03 bash[23394]: cluster 2026-03-10T14:55:04.260615+0000 mgr.y (mgr.24425) 132 : cluster [DBG] pgmap v141: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 360 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:05.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:05 vm03 bash[23394]: cluster 2026-03-10T14:55:04.260615+0000 mgr.y (mgr.24425) 132 : cluster [DBG] pgmap v141: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 360 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:05.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:05 vm03 bash[23394]: cluster 2026-03-10T14:55:04.621997+0000 mon.a (mon.0) 971 : cluster [DBG] osdmap e128: 8 total, 8 up, 8 in 2026-03-10T14:55:05.875 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:05 vm03 bash[23394]: cluster 2026-03-10T14:55:04.621997+0000 mon.a (mon.0) 971 : cluster [DBG] osdmap e128: 8 total, 8 up, 8 in 2026-03-10T14:55:05.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:05 vm03 bash[23394]: audit 2026-03-10T14:55:04.656035+0000 mon.a (mon.0) 972 : audit [INF] from='client.? 192.168.123.100:0/2441274717' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:05.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:05 vm03 bash[23394]: audit 2026-03-10T14:55:04.656035+0000 mon.a (mon.0) 972 : audit [INF] from='client.? 192.168.123.100:0/2441274717' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:05.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:05 vm00 bash[28403]: cluster 2026-03-10T14:55:04.260615+0000 mgr.y (mgr.24425) 132 : cluster [DBG] pgmap v141: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 360 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:05.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:05 vm00 bash[28403]: cluster 2026-03-10T14:55:04.260615+0000 mgr.y (mgr.24425) 132 : cluster [DBG] pgmap v141: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 360 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:05.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:05 vm00 bash[28403]: cluster 2026-03-10T14:55:04.621997+0000 mon.a (mon.0) 971 : cluster [DBG] osdmap e128: 8 total, 8 up, 8 in 2026-03-10T14:55:05.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:05 vm00 bash[28403]: cluster 2026-03-10T14:55:04.621997+0000 mon.a (mon.0) 971 : cluster [DBG] osdmap e128: 8 total, 8 up, 8 in 2026-03-10T14:55:05.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:05 vm00 bash[28403]: audit 2026-03-10T14:55:04.656035+0000 mon.a (mon.0) 972 : audit [INF] from='client.? 
192.168.123.100:0/2441274717' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:05.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:05 vm00 bash[28403]: audit 2026-03-10T14:55:04.656035+0000 mon.a (mon.0) 972 : audit [INF] from='client.? 192.168.123.100:0/2441274717' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:05.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:05 vm00 bash[20726]: cluster 2026-03-10T14:55:04.260615+0000 mgr.y (mgr.24425) 132 : cluster [DBG] pgmap v141: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 360 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:05.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:05 vm00 bash[20726]: cluster 2026-03-10T14:55:04.260615+0000 mgr.y (mgr.24425) 132 : cluster [DBG] pgmap v141: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 360 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:05.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:05 vm00 bash[20726]: cluster 2026-03-10T14:55:04.621997+0000 mon.a (mon.0) 971 : cluster [DBG] osdmap e128: 8 total, 8 up, 8 in 2026-03-10T14:55:05.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:05 vm00 bash[20726]: cluster 2026-03-10T14:55:04.621997+0000 mon.a (mon.0) 971 : cluster [DBG] osdmap e128: 8 total, 8 up, 8 in 2026-03-10T14:55:05.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:05 vm00 bash[20726]: audit 2026-03-10T14:55:04.656035+0000 mon.a (mon.0) 972 : audit [INF] from='client.? 192.168.123.100:0/2441274717' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:05.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:05 vm00 bash[20726]: audit 2026-03-10T14:55:04.656035+0000 mon.a (mon.0) 972 : audit [INF] from='client.? 
192.168.123.100:0/2441274717' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:06.663 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_cmpext PASSED [ 31%] 2026-03-10T14:55:06.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:06 vm00 bash[28403]: audit 2026-03-10T14:55:05.627175+0000 mon.a (mon.0) 973 : audit [INF] from='client.? 192.168.123.100:0/2441274717' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:55:06.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:06 vm00 bash[28403]: audit 2026-03-10T14:55:05.627175+0000 mon.a (mon.0) 973 : audit [INF] from='client.? 192.168.123.100:0/2441274717' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:55:06.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:06 vm00 bash[28403]: cluster 2026-03-10T14:55:05.638236+0000 mon.a (mon.0) 974 : cluster [DBG] osdmap e129: 8 total, 8 up, 8 in 2026-03-10T14:55:06.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:06 vm00 bash[28403]: cluster 2026-03-10T14:55:05.638236+0000 mon.a (mon.0) 974 : cluster [DBG] osdmap e129: 8 total, 8 up, 8 in 2026-03-10T14:55:06.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:06 vm00 bash[28403]: audit 2026-03-10T14:55:06.138566+0000 mon.a (mon.0) 975 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:55:06.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:06 vm00 bash[28403]: audit 2026-03-10T14:55:06.138566+0000 mon.a (mon.0) 975 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:55:06.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:06 vm00 bash[28403]: audit 2026-03-10T14:55:06.507450+0000 mon.a (mon.0) 976 : audit [DBG] 
from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:55:06.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:06 vm00 bash[28403]: audit 2026-03-10T14:55:06.507450+0000 mon.a (mon.0) 976 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:55:06.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:06 vm00 bash[28403]: audit 2026-03-10T14:55:06.508546+0000 mon.a (mon.0) 977 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:55:06.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:06 vm00 bash[28403]: audit 2026-03-10T14:55:06.508546+0000 mon.a (mon.0) 977 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:55:06.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:06 vm00 bash[28403]: audit 2026-03-10T14:55:06.575550+0000 mon.a (mon.0) 978 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:55:06.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:06 vm00 bash[28403]: audit 2026-03-10T14:55:06.575550+0000 mon.a (mon.0) 978 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:55:06.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:06 vm00 bash[20726]: audit 2026-03-10T14:55:05.627175+0000 mon.a (mon.0) 973 : audit [INF] from='client.? 192.168.123.100:0/2441274717' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:55:06.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:06 vm00 bash[20726]: audit 2026-03-10T14:55:05.627175+0000 mon.a (mon.0) 973 : audit [INF] from='client.? 
192.168.123.100:0/2441274717' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:55:06.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:06 vm00 bash[20726]: cluster 2026-03-10T14:55:05.638236+0000 mon.a (mon.0) 974 : cluster [DBG] osdmap e129: 8 total, 8 up, 8 in 2026-03-10T14:55:06.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:06 vm00 bash[20726]: cluster 2026-03-10T14:55:05.638236+0000 mon.a (mon.0) 974 : cluster [DBG] osdmap e129: 8 total, 8 up, 8 in 2026-03-10T14:55:06.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:06 vm00 bash[20726]: audit 2026-03-10T14:55:06.138566+0000 mon.a (mon.0) 975 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:55:06.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:06 vm00 bash[20726]: audit 2026-03-10T14:55:06.138566+0000 mon.a (mon.0) 975 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:55:06.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:06 vm00 bash[20726]: audit 2026-03-10T14:55:06.507450+0000 mon.a (mon.0) 976 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:55:06.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:06 vm00 bash[20726]: audit 2026-03-10T14:55:06.507450+0000 mon.a (mon.0) 976 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:55:06.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:06 vm00 bash[20726]: audit 2026-03-10T14:55:06.508546+0000 mon.a (mon.0) 977 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:55:06.965 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:06 vm00 bash[20726]: audit 2026-03-10T14:55:06.508546+0000 mon.a (mon.0) 977 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:55:06.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:06 vm00 bash[20726]: audit 2026-03-10T14:55:06.575550+0000 mon.a (mon.0) 978 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:55:06.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:06 vm00 bash[20726]: audit 2026-03-10T14:55:06.575550+0000 mon.a (mon.0) 978 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:55:07.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:06 vm03 bash[23394]: audit 2026-03-10T14:55:05.627175+0000 mon.a (mon.0) 973 : audit [INF] from='client.? 192.168.123.100:0/2441274717' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:55:07.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:06 vm03 bash[23394]: audit 2026-03-10T14:55:05.627175+0000 mon.a (mon.0) 973 : audit [INF] from='client.? 
192.168.123.100:0/2441274717' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:55:07.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:06 vm03 bash[23394]: cluster 2026-03-10T14:55:05.638236+0000 mon.a (mon.0) 974 : cluster [DBG] osdmap e129: 8 total, 8 up, 8 in 2026-03-10T14:55:07.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:06 vm03 bash[23394]: cluster 2026-03-10T14:55:05.638236+0000 mon.a (mon.0) 974 : cluster [DBG] osdmap e129: 8 total, 8 up, 8 in 2026-03-10T14:55:07.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:06 vm03 bash[23394]: audit 2026-03-10T14:55:06.138566+0000 mon.a (mon.0) 975 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:55:07.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:06 vm03 bash[23394]: audit 2026-03-10T14:55:06.138566+0000 mon.a (mon.0) 975 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:55:07.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:06 vm03 bash[23394]: audit 2026-03-10T14:55:06.507450+0000 mon.a (mon.0) 976 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:55:07.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:06 vm03 bash[23394]: audit 2026-03-10T14:55:06.507450+0000 mon.a (mon.0) 976 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:55:07.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:06 vm03 bash[23394]: audit 2026-03-10T14:55:06.508546+0000 mon.a (mon.0) 977 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:55:07.125 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:06 vm03 bash[23394]: audit 2026-03-10T14:55:06.508546+0000 mon.a (mon.0) 977 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:55:07.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:06 vm03 bash[23394]: audit 2026-03-10T14:55:06.575550+0000 mon.a (mon.0) 978 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:55:07.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:06 vm03 bash[23394]: audit 2026-03-10T14:55:06.575550+0000 mon.a (mon.0) 978 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:55:07.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:07 vm00 bash[28403]: cluster 2026-03-10T14:55:06.261088+0000 mgr.y (mgr.24425) 133 : cluster [DBG] pgmap v144: 196 pgs: 196 active+clean; 455 KiB data, 361 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-10T14:55:07.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:07 vm00 bash[28403]: cluster 2026-03-10T14:55:06.261088+0000 mgr.y (mgr.24425) 133 : cluster [DBG] pgmap v144: 196 pgs: 196 active+clean; 455 KiB data, 361 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-10T14:55:07.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:07 vm00 bash[28403]: cluster 2026-03-10T14:55:06.660471+0000 mon.a (mon.0) 979 : cluster [DBG] osdmap e130: 8 total, 8 up, 8 in 2026-03-10T14:55:07.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:07 vm00 bash[28403]: cluster 2026-03-10T14:55:06.660471+0000 mon.a (mon.0) 979 : cluster [DBG] osdmap e130: 8 total, 8 up, 8 in 2026-03-10T14:55:07.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:07 vm00 bash[20726]: cluster 2026-03-10T14:55:06.261088+0000 mgr.y (mgr.24425) 133 : cluster [DBG] pgmap v144: 196 pgs: 196 active+clean; 455 KiB data, 361 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 
2026-03-10T14:55:07.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:07 vm00 bash[20726]: cluster 2026-03-10T14:55:06.261088+0000 mgr.y (mgr.24425) 133 : cluster [DBG] pgmap v144: 196 pgs: 196 active+clean; 455 KiB data, 361 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-10T14:55:07.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:07 vm00 bash[20726]: cluster 2026-03-10T14:55:06.660471+0000 mon.a (mon.0) 979 : cluster [DBG] osdmap e130: 8 total, 8 up, 8 in 2026-03-10T14:55:07.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:07 vm00 bash[20726]: cluster 2026-03-10T14:55:06.660471+0000 mon.a (mon.0) 979 : cluster [DBG] osdmap e130: 8 total, 8 up, 8 in 2026-03-10T14:55:08.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:07 vm03 bash[23394]: cluster 2026-03-10T14:55:06.261088+0000 mgr.y (mgr.24425) 133 : cluster [DBG] pgmap v144: 196 pgs: 196 active+clean; 455 KiB data, 361 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-10T14:55:08.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:07 vm03 bash[23394]: cluster 2026-03-10T14:55:06.261088+0000 mgr.y (mgr.24425) 133 : cluster [DBG] pgmap v144: 196 pgs: 196 active+clean; 455 KiB data, 361 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-10T14:55:08.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:07 vm03 bash[23394]: cluster 2026-03-10T14:55:06.660471+0000 mon.a (mon.0) 979 : cluster [DBG] osdmap e130: 8 total, 8 up, 8 in 2026-03-10T14:55:08.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:07 vm03 bash[23394]: cluster 2026-03-10T14:55:06.660471+0000 mon.a (mon.0) 979 : cluster [DBG] osdmap e130: 8 total, 8 up, 8 in 2026-03-10T14:55:08.875 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 14:55:08 vm03 bash[48459]: debug there is no tcmu-runner data available 2026-03-10T14:55:08.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:08 vm03 bash[23394]: cluster 2026-03-10T14:55:07.666718+0000 mon.a (mon.0) 980 : cluster 
[DBG] osdmap e131: 8 total, 8 up, 8 in 2026-03-10T14:55:08.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:08 vm03 bash[23394]: cluster 2026-03-10T14:55:07.666718+0000 mon.a (mon.0) 980 : cluster [DBG] osdmap e131: 8 total, 8 up, 8 in 2026-03-10T14:55:08.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:08 vm00 bash[20726]: cluster 2026-03-10T14:55:07.666718+0000 mon.a (mon.0) 980 : cluster [DBG] osdmap e131: 8 total, 8 up, 8 in 2026-03-10T14:55:08.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:08 vm00 bash[20726]: cluster 2026-03-10T14:55:07.666718+0000 mon.a (mon.0) 980 : cluster [DBG] osdmap e131: 8 total, 8 up, 8 in 2026-03-10T14:55:08.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:08 vm00 bash[28403]: cluster 2026-03-10T14:55:07.666718+0000 mon.a (mon.0) 980 : cluster [DBG] osdmap e131: 8 total, 8 up, 8 in 2026-03-10T14:55:08.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:08 vm00 bash[28403]: cluster 2026-03-10T14:55:07.666718+0000 mon.a (mon.0) 980 : cluster [DBG] osdmap e131: 8 total, 8 up, 8 in 2026-03-10T14:55:09.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:09 vm00 bash[20726]: cluster 2026-03-10T14:55:08.261425+0000 mgr.y (mgr.24425) 134 : cluster [DBG] pgmap v147: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 361 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:09.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:09 vm00 bash[20726]: cluster 2026-03-10T14:55:08.261425+0000 mgr.y (mgr.24425) 134 : cluster [DBG] pgmap v147: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 361 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:09.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:09 vm00 bash[20726]: audit 2026-03-10T14:55:08.616344+0000 mgr.y (mgr.24425) 135 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:55:09.965 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:09 vm00 bash[20726]: audit 2026-03-10T14:55:08.616344+0000 mgr.y (mgr.24425) 135 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:55:09.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:09 vm00 bash[20726]: cluster 2026-03-10T14:55:08.686205+0000 mon.a (mon.0) 981 : cluster [DBG] osdmap e132: 8 total, 8 up, 8 in 2026-03-10T14:55:09.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:09 vm00 bash[20726]: cluster 2026-03-10T14:55:08.686205+0000 mon.a (mon.0) 981 : cluster [DBG] osdmap e132: 8 total, 8 up, 8 in 2026-03-10T14:55:09.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:09 vm00 bash[20726]: audit 2026-03-10T14:55:08.728174+0000 mon.a (mon.0) 982 : audit [INF] from='client.? 192.168.123.100:0/1032514813' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:09.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:09 vm00 bash[20726]: audit 2026-03-10T14:55:08.728174+0000 mon.a (mon.0) 982 : audit [INF] from='client.? 
192.168.123.100:0/1032514813' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:09.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:09 vm00 bash[20726]: audit 2026-03-10T14:55:09.396534+0000 mon.a (mon.0) 983 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:55:09.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:09 vm00 bash[20726]: audit 2026-03-10T14:55:09.396534+0000 mon.a (mon.0) 983 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:55:09.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:09 vm00 bash[28403]: cluster 2026-03-10T14:55:08.261425+0000 mgr.y (mgr.24425) 134 : cluster [DBG] pgmap v147: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 361 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:09.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:09 vm00 bash[28403]: cluster 2026-03-10T14:55:08.261425+0000 mgr.y (mgr.24425) 134 : cluster [DBG] pgmap v147: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 361 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:09.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:09 vm00 bash[28403]: audit 2026-03-10T14:55:08.616344+0000 mgr.y (mgr.24425) 135 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:55:09.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:09 vm00 bash[28403]: audit 2026-03-10T14:55:08.616344+0000 mgr.y (mgr.24425) 135 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:55:09.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:09 vm00 bash[28403]: cluster 2026-03-10T14:55:08.686205+0000 
mon.a (mon.0) 981 : cluster [DBG] osdmap e132: 8 total, 8 up, 8 in 2026-03-10T14:55:09.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:09 vm00 bash[28403]: cluster 2026-03-10T14:55:08.686205+0000 mon.a (mon.0) 981 : cluster [DBG] osdmap e132: 8 total, 8 up, 8 in 2026-03-10T14:55:09.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:09 vm00 bash[28403]: audit 2026-03-10T14:55:08.728174+0000 mon.a (mon.0) 982 : audit [INF] from='client.? 192.168.123.100:0/1032514813' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:09.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:09 vm00 bash[28403]: audit 2026-03-10T14:55:08.728174+0000 mon.a (mon.0) 982 : audit [INF] from='client.? 192.168.123.100:0/1032514813' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:09.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:09 vm00 bash[28403]: audit 2026-03-10T14:55:09.396534+0000 mon.a (mon.0) 983 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:55:09.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:09 vm00 bash[28403]: audit 2026-03-10T14:55:09.396534+0000 mon.a (mon.0) 983 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:55:10.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:09 vm03 bash[23394]: cluster 2026-03-10T14:55:08.261425+0000 mgr.y (mgr.24425) 134 : cluster [DBG] pgmap v147: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 361 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:10.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:09 vm03 bash[23394]: cluster 2026-03-10T14:55:08.261425+0000 mgr.y (mgr.24425) 134 : cluster [DBG] pgmap v147: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 361 MiB used, 160 GiB / 
160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:10.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:09 vm03 bash[23394]: audit 2026-03-10T14:55:08.616344+0000 mgr.y (mgr.24425) 135 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:55:10.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:09 vm03 bash[23394]: audit 2026-03-10T14:55:08.616344+0000 mgr.y (mgr.24425) 135 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:55:10.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:09 vm03 bash[23394]: cluster 2026-03-10T14:55:08.686205+0000 mon.a (mon.0) 981 : cluster [DBG] osdmap e132: 8 total, 8 up, 8 in 2026-03-10T14:55:10.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:09 vm03 bash[23394]: cluster 2026-03-10T14:55:08.686205+0000 mon.a (mon.0) 981 : cluster [DBG] osdmap e132: 8 total, 8 up, 8 in 2026-03-10T14:55:10.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:09 vm03 bash[23394]: audit 2026-03-10T14:55:08.728174+0000 mon.a (mon.0) 982 : audit [INF] from='client.? 192.168.123.100:0/1032514813' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:10.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:09 vm03 bash[23394]: audit 2026-03-10T14:55:08.728174+0000 mon.a (mon.0) 982 : audit [INF] from='client.? 
192.168.123.100:0/1032514813' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T14:55:10.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:09 vm03 bash[23394]: audit 2026-03-10T14:55:09.396534+0000 mon.a (mon.0) 983 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:55:10.678 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_list_objects_empty PASSED [ 32%]
2026-03-10T14:55:10.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:10 vm00 bash[20726]: audit 2026-03-10T14:55:09.670223+0000 mon.a (mon.0) 984 : audit [INF] from='client.? 192.168.123.100:0/1032514813' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T14:55:10.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:10 vm00 bash[20726]: cluster 2026-03-10T14:55:09.679495+0000 mon.a (mon.0) 985 : cluster [DBG] osdmap e133: 8 total, 8 up, 8 in
2026-03-10T14:55:10.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:10 vm00 bash[20726]: cluster 2026-03-10T14:55:10.678929+0000 mon.a (mon.0) 986 : cluster [DBG] osdmap e134: 8 total, 8 up, 8 in
2026-03-10T14:55:10.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:10 vm00 bash[28403]: audit 2026-03-10T14:55:09.670223+0000 mon.a (mon.0) 984 : audit [INF] from='client.? 192.168.123.100:0/1032514813' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T14:55:10.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:10 vm00 bash[28403]: cluster 2026-03-10T14:55:09.679495+0000 mon.a (mon.0) 985 : cluster [DBG] osdmap e133: 8 total, 8 up, 8 in
2026-03-10T14:55:10.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:10 vm00 bash[28403]: cluster 2026-03-10T14:55:10.678929+0000 mon.a (mon.0) 986 : cluster [DBG] osdmap e134: 8 total, 8 up, 8 in
2026-03-10T14:55:11.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:10 vm03 bash[23394]: audit 2026-03-10T14:55:09.670223+0000 mon.a (mon.0) 984 : audit [INF] from='client.? 192.168.123.100:0/1032514813' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T14:55:11.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:10 vm03 bash[23394]: cluster 2026-03-10T14:55:09.679495+0000 mon.a (mon.0) 985 : cluster [DBG] osdmap e133: 8 total, 8 up, 8 in
2026-03-10T14:55:11.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:10 vm03 bash[23394]: cluster 2026-03-10T14:55:10.678929+0000 mon.a (mon.0) 986 : cluster [DBG] osdmap e134: 8 total, 8 up, 8 in
2026-03-10T14:55:12.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:11 vm00 bash[28403]: cluster 2026-03-10T14:55:10.261955+0000 mgr.y (mgr.24425) 136 : cluster [DBG] pgmap v150: 196 pgs: 2 unknown, 194 active+clean; 455 KiB data, 361 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T14:55:12.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:11 vm00 bash[20726]: cluster 2026-03-10T14:55:10.261955+0000 mgr.y (mgr.24425) 136 : cluster [DBG] pgmap v150: 196 pgs: 2 unknown, 194 active+clean; 455 KiB data, 361 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T14:55:12.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:11 vm03 bash[23394]: cluster 2026-03-10T14:55:10.261955+0000 mgr.y (mgr.24425) 136 : cluster [DBG] pgmap v150: 196 pgs: 2 unknown, 194 active+clean; 455 KiB data, 361 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T14:55:13.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:12 vm00 bash[28403]: cluster 2026-03-10T14:55:11.871255+0000 mon.a (mon.0) 987 : cluster [DBG] osdmap e135: 8 total, 8 up, 8 in
2026-03-10T14:55:13.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:12 vm00 bash[28403]: cluster 2026-03-10T14:55:12.262360+0000 mgr.y (mgr.24425) 137 : cluster [DBG] pgmap v153: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 361 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:55:13.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:12 vm00 bash[28403]: cluster 2026-03-10T14:55:12.875333+0000 mon.a (mon.0) 988 : cluster [DBG] osdmap e136: 8 total, 8 up, 8 in
2026-03-10T14:55:13.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:12 vm00 bash[20726]: cluster 2026-03-10T14:55:11.871255+0000 mon.a (mon.0) 987 : cluster [DBG] osdmap e135: 8 total, 8 up, 8 in
2026-03-10T14:55:13.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:12 vm00 bash[20726]: cluster 2026-03-10T14:55:12.262360+0000 mgr.y (mgr.24425) 137 : cluster [DBG] pgmap v153: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 361 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:55:13.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:12 vm00 bash[20726]: cluster 2026-03-10T14:55:12.875333+0000 mon.a (mon.0) 988 : cluster [DBG] osdmap e136: 8 total, 8 up, 8 in
2026-03-10T14:55:13.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:12 vm03 bash[23394]: cluster 2026-03-10T14:55:11.871255+0000 mon.a (mon.0) 987 : cluster [DBG] osdmap e135: 8 total, 8 up, 8 in
2026-03-10T14:55:13.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:12 vm03 bash[23394]: cluster 2026-03-10T14:55:12.262360+0000 mgr.y (mgr.24425) 137 : cluster [DBG] pgmap v153: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 361 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:55:13.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:12 vm03 bash[23394]: cluster 2026-03-10T14:55:12.875333+0000 mon.a (mon.0) 988 : cluster [DBG] osdmap e136: 8 total, 8 up, 8 in
2026-03-10T14:55:14.215 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:55:13 vm00 bash[21005]: ::ffff:192.168.123.103 - - [10/Mar/2026:14:55:13] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T14:55:14.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:13 vm00 bash[28403]: audit 2026-03-10T14:55:12.922990+0000 mon.b (mon.1) 32 : audit [INF] from='client.? 192.168.123.100:0/2440805226' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T14:55:14.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:13 vm00 bash[28403]: audit 2026-03-10T14:55:12.926647+0000 mon.a (mon.0) 989 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T14:55:14.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:13 vm00 bash[28403]: audit 2026-03-10T14:55:13.874558+0000 mon.a (mon.0) 990 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T14:55:14.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:13 vm00 bash[28403]: cluster 2026-03-10T14:55:13.879105+0000 mon.a (mon.0) 991 : cluster [DBG] osdmap e137: 8 total, 8 up, 8 in
2026-03-10T14:55:14.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:13 vm00 bash[20726]: audit 2026-03-10T14:55:12.922990+0000 mon.b (mon.1) 32 : audit [INF] from='client.? 192.168.123.100:0/2440805226' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T14:55:14.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:13 vm00 bash[20726]: audit 2026-03-10T14:55:12.926647+0000 mon.a (mon.0) 989 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T14:55:14.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:13 vm00 bash[20726]: audit 2026-03-10T14:55:13.874558+0000 mon.a (mon.0) 990 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T14:55:14.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:13 vm00 bash[20726]: cluster 2026-03-10T14:55:13.879105+0000 mon.a (mon.0) 991 : cluster [DBG] osdmap e137: 8 total, 8 up, 8 in
2026-03-10T14:55:14.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:13 vm03 bash[23394]: audit 2026-03-10T14:55:12.922990+0000 mon.b (mon.1) 32 : audit [INF] from='client.? 192.168.123.100:0/2440805226' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T14:55:14.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:13 vm03 bash[23394]: audit 2026-03-10T14:55:12.926647+0000 mon.a (mon.0) 989 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T14:55:14.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:13 vm03 bash[23394]: audit 2026-03-10T14:55:13.874558+0000 mon.a (mon.0) 990 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T14:55:14.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:13 vm03 bash[23394]: cluster 2026-03-10T14:55:13.879105+0000 mon.a (mon.0) 991 : cluster [DBG] osdmap e137: 8 total, 8 up, 8 in
2026-03-10T14:55:14.951 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_read_crc PASSED [ 34%]
2026-03-10T14:55:15.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:14 vm00 bash[28403]: cluster 2026-03-10T14:55:14.262744+0000 mgr.y (mgr.24425) 138 : cluster [DBG] pgmap v156: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 361 MiB used, 160 GiB / 160 GiB avail
2026-03-10T14:55:15.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:14 vm00 bash[20726]: cluster 2026-03-10T14:55:14.262744+0000 mgr.y (mgr.24425) 138 : cluster [DBG] pgmap v156: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 361 MiB used, 160 GiB / 160 GiB avail
2026-03-10T14:55:15.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:14 vm03 bash[23394]: cluster 2026-03-10T14:55:14.262744+0000 mgr.y (mgr.24425) 138 : cluster [DBG] pgmap v156: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 361 MiB used, 160 GiB / 160 GiB avail
2026-03-10T14:55:16.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:15 vm03 bash[23394]: cluster 2026-03-10T14:55:14.952015+0000 mon.a (mon.0) 992 : cluster [DBG] osdmap e138: 8 total, 8 up, 8 in
2026-03-10T14:55:16.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:15 vm00 bash[20726]: cluster 2026-03-10T14:55:14.952015+0000 mon.a (mon.0) 992 : cluster [DBG] osdmap e138: 8 total, 8 up, 8 in
2026-03-10T14:55:16.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:15 vm00 bash[28403]: cluster 2026-03-10T14:55:14.952015+0000 mon.a (mon.0) 992 : cluster [DBG] osdmap e138: 8 total, 8 up, 8 in
2026-03-10T14:55:17.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:17 vm03 bash[23394]: cluster 2026-03-10T14:55:15.980934+0000 mon.a (mon.0) 993 : cluster [DBG] osdmap e139: 8 total, 8 up, 8 in
2026-03-10T14:55:17.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:17 vm03 bash[23394]: cluster 2026-03-10T14:55:16.263097+0000 mgr.y (mgr.24425) 139 : cluster [DBG] pgmap v159: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 362 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:55:17.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:17 vm03 bash[23394]: cluster 2026-03-10T14:55:16.981764+0000 mon.a (mon.0) 994 : cluster [DBG] osdmap e140: 8 total, 8 up, 8 in
2026-03-10T14:55:17.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:17 vm00 bash[20726]: cluster 2026-03-10T14:55:15.980934+0000 mon.a (mon.0) 993 : cluster [DBG] osdmap e139: 8 total, 8 up, 8 in
2026-03-10T14:55:17.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:17 vm00 bash[20726]: cluster 2026-03-10T14:55:16.263097+0000 mgr.y (mgr.24425) 139 : cluster [DBG] pgmap v159: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 362 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:55:17.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:17 vm00 bash[20726]: cluster 2026-03-10T14:55:16.981764+0000 mon.a (mon.0) 994 : cluster [DBG] osdmap e140: 8 total, 8 up, 8 in
2026-03-10T14:55:17.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:17 vm00 bash[28403]: cluster 2026-03-10T14:55:15.980934+0000 mon.a (mon.0) 993 : cluster [DBG] osdmap e139: 8 total, 8 up, 8 in
2026-03-10T14:55:17.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:17 vm00 bash[28403]: cluster 2026-03-10T14:55:16.263097+0000 mgr.y (mgr.24425) 139 : cluster [DBG] pgmap v159: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 362 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:55:17.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:17 vm00 bash[28403]: cluster 2026-03-10T14:55:16.981764+0000 mon.a (mon.0) 994 : cluster [DBG] osdmap e140: 8 total, 8 up, 8 in
2026-03-10T14:55:18.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:18 vm03 bash[23394]: audit 2026-03-10T14:55:17.051203+0000 mon.a (mon.0) 995 : audit [INF] from='client.? 192.168.123.100:0/173342337' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T14:55:18.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:18 vm00 bash[20726]: audit 2026-03-10T14:55:17.051203+0000 mon.a (mon.0) 995 : audit [INF] from='client.? 192.168.123.100:0/173342337' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T14:55:18.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:18 vm00 bash[28403]: audit 2026-03-10T14:55:17.051203+0000 mon.a (mon.0) 995 : audit [INF] from='client.? 192.168.123.100:0/173342337' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T14:55:18.875 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 14:55:18 vm03 bash[48459]: debug there is no tcmu-runner data available
2026-03-10T14:55:19.543 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_list_objects PASSED [ 35%]
2026-03-10T14:55:19.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:19 vm03 bash[23394]: audit 2026-03-10T14:55:18.067405+0000 mon.a (mon.0) 996 : audit [INF] from='client.? 192.168.123.100:0/173342337' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T14:55:19.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:19 vm03 bash[23394]: cluster 2026-03-10T14:55:18.077777+0000 mon.a (mon.0) 997 : cluster [DBG] osdmap e141: 8 total, 8 up, 8 in
2026-03-10T14:55:19.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:19 vm03 bash[23394]: cluster 2026-03-10T14:55:18.263393+0000 mgr.y (mgr.24425) 140 : cluster [DBG] pgmap v162: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 362 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:55:19.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:19 vm03 bash[23394]: audit 2026-03-10T14:55:18.627277+0000 mgr.y (mgr.24425) 141 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:55:19.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:19 vm00 bash[20726]: audit 2026-03-10T14:55:18.067405+0000 mon.a (mon.0) 996 : audit [INF] from='client.? 192.168.123.100:0/173342337' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T14:55:19.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:19 vm00 bash[20726]: cluster 2026-03-10T14:55:18.077777+0000 mon.a (mon.0) 997 : cluster [DBG] osdmap e141: 8 total, 8 up, 8 in
2026-03-10T14:55:19.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:19 vm00 bash[20726]: cluster 2026-03-10T14:55:18.263393+0000 mgr.y (mgr.24425) 140 : cluster [DBG] pgmap v162: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 362 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:55:19.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:19 vm00 bash[20726]: audit 2026-03-10T14:55:18.627277+0000 mgr.y (mgr.24425) 141 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:55:19.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:19 vm00 bash[28403]: audit 2026-03-10T14:55:18.067405+0000 mon.a (mon.0) 996 : audit [INF] from='client.? 192.168.123.100:0/173342337' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T14:55:19.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:19 vm00 bash[28403]: cluster 2026-03-10T14:55:18.077777+0000 mon.a (mon.0) 997 : cluster [DBG] osdmap e141: 8 total, 8 up, 8 in
2026-03-10T14:55:19.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:19 vm00 bash[28403]: cluster 2026-03-10T14:55:18.263393+0000 mgr.y (mgr.24425) 140 : cluster [DBG] pgmap v162: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 362 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:55:19.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:19 vm00 bash[28403]: audit 2026-03-10T14:55:18.627277+0000 mgr.y (mgr.24425) 141 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:55:20.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:20 vm03 bash[23394]: cluster 2026-03-10T14:55:19.540925+0000 mon.a (mon.0) 998 : cluster [DBG] osdmap e142: 8 total, 8 up, 8 in
2026-03-10T14:55:20.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:20 vm00 bash[20726]: cluster 2026-03-10T14:55:19.540925+0000 mon.a (mon.0) 998 : cluster [DBG] osdmap e142: 8 total, 8 up, 8 in
2026-03-10T14:55:20.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:20 vm00 bash[28403]: cluster 2026-03-10T14:55:19.540925+0000 mon.a (mon.0) 998 : cluster [DBG] osdmap e142: 8 total, 8 up, 8 in
2026-03-10T14:55:21.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:21 vm00 bash[20726]: cluster 2026-03-10T14:55:20.263802+0000 mgr.y (mgr.24425) 142 : cluster [DBG] pgmap v164: 164 pgs: 164 active+clean; 455 KiB data, 362 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:55:21.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:21 vm00 bash[20726]: cluster 2026-03-10T14:55:20.561287+0000 mon.a (mon.0) 999 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T14:55:21.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:21 vm00 bash[20726]: cluster 2026-03-10T14:55:20.590991+0000 mon.a (mon.0) 1000 : cluster [DBG] osdmap e143: 8 total, 8 up, 8 in
2026-03-10T14:55:21.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:21 vm00 bash[28403]: cluster 2026-03-10T14:55:20.263802+0000 mgr.y (mgr.24425) 142 : cluster [DBG] pgmap v164: 164 pgs: 164 active+clean; 455 KiB data, 362 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:55:21.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:21 vm00 bash[28403]: cluster 2026-03-10T14:55:20.561287+0000 mon.a (mon.0) 999 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T14:55:21.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:21 vm00 bash[28403]: cluster 2026-03-10T14:55:20.590991+0000 mon.a (mon.0) 1000 : cluster [DBG] osdmap e143: 8 total, 8 up, 8 in
2026-03-10T14:55:22.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:21 vm03 bash[23394]: cluster 2026-03-10T14:55:20.263802+0000 mgr.y (mgr.24425) 142 : cluster [DBG] pgmap v164: 164 pgs: 164 active+clean; 455 KiB data, 362 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:55:22.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:21 vm03 bash[23394]: cluster 2026-03-10T14:55:20.561287+0000 mon.a (mon.0) 999 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T14:55:22.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:21 vm03 bash[23394]: cluster 2026-03-10T14:55:20.590991+0000 mon.a (mon.0) 1000 : cluster [DBG] osdmap e143: 8 total, 8 up, 8 in
2026-03-10T14:55:22.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:22 vm00 bash[20726]: cluster 2026-03-10T14:55:21.643691+0000 mon.a (mon.0) 1001 : cluster [DBG] osdmap e144: 8 total, 8 up, 8 in
2026-03-10T14:55:22.965
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:22 vm00 bash[20726]: cluster 2026-03-10T14:55:21.643691+0000 mon.a (mon.0) 1001 : cluster [DBG] osdmap e144: 8 total, 8 up, 8 in 2026-03-10T14:55:22.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:22 vm00 bash[20726]: audit 2026-03-10T14:55:21.707061+0000 mon.a (mon.0) 1002 : audit [INF] from='client.? 192.168.123.100:0/2618341364' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:22.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:22 vm00 bash[20726]: audit 2026-03-10T14:55:21.707061+0000 mon.a (mon.0) 1002 : audit [INF] from='client.? 192.168.123.100:0/2618341364' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:22.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:22 vm00 bash[20726]: audit 2026-03-10T14:55:22.635425+0000 mon.a (mon.0) 1003 : audit [INF] from='client.? 192.168.123.100:0/2618341364' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:55:22.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:22 vm00 bash[20726]: audit 2026-03-10T14:55:22.635425+0000 mon.a (mon.0) 1003 : audit [INF] from='client.? 
192.168.123.100:0/2618341364' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:55:22.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:22 vm00 bash[20726]: cluster 2026-03-10T14:55:22.638864+0000 mon.a (mon.0) 1004 : cluster [DBG] osdmap e145: 8 total, 8 up, 8 in 2026-03-10T14:55:22.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:22 vm00 bash[20726]: cluster 2026-03-10T14:55:22.638864+0000 mon.a (mon.0) 1004 : cluster [DBG] osdmap e145: 8 total, 8 up, 8 in 2026-03-10T14:55:22.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:22 vm00 bash[28403]: cluster 2026-03-10T14:55:21.643691+0000 mon.a (mon.0) 1001 : cluster [DBG] osdmap e144: 8 total, 8 up, 8 in 2026-03-10T14:55:22.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:22 vm00 bash[28403]: cluster 2026-03-10T14:55:21.643691+0000 mon.a (mon.0) 1001 : cluster [DBG] osdmap e144: 8 total, 8 up, 8 in 2026-03-10T14:55:22.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:22 vm00 bash[28403]: audit 2026-03-10T14:55:21.707061+0000 mon.a (mon.0) 1002 : audit [INF] from='client.? 192.168.123.100:0/2618341364' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:22.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:22 vm00 bash[28403]: audit 2026-03-10T14:55:21.707061+0000 mon.a (mon.0) 1002 : audit [INF] from='client.? 192.168.123.100:0/2618341364' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:22.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:22 vm00 bash[28403]: audit 2026-03-10T14:55:22.635425+0000 mon.a (mon.0) 1003 : audit [INF] from='client.? 192.168.123.100:0/2618341364' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:55:22.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:22 vm00 bash[28403]: audit 2026-03-10T14:55:22.635425+0000 mon.a (mon.0) 1003 : audit [INF] from='client.? 
192.168.123.100:0/2618341364' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:55:22.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:22 vm00 bash[28403]: cluster 2026-03-10T14:55:22.638864+0000 mon.a (mon.0) 1004 : cluster [DBG] osdmap e145: 8 total, 8 up, 8 in 2026-03-10T14:55:22.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:22 vm00 bash[28403]: cluster 2026-03-10T14:55:22.638864+0000 mon.a (mon.0) 1004 : cluster [DBG] osdmap e145: 8 total, 8 up, 8 in 2026-03-10T14:55:23.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:22 vm03 bash[23394]: cluster 2026-03-10T14:55:21.643691+0000 mon.a (mon.0) 1001 : cluster [DBG] osdmap e144: 8 total, 8 up, 8 in 2026-03-10T14:55:23.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:22 vm03 bash[23394]: cluster 2026-03-10T14:55:21.643691+0000 mon.a (mon.0) 1001 : cluster [DBG] osdmap e144: 8 total, 8 up, 8 in 2026-03-10T14:55:23.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:22 vm03 bash[23394]: audit 2026-03-10T14:55:21.707061+0000 mon.a (mon.0) 1002 : audit [INF] from='client.? 192.168.123.100:0/2618341364' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:23.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:22 vm03 bash[23394]: audit 2026-03-10T14:55:21.707061+0000 mon.a (mon.0) 1002 : audit [INF] from='client.? 192.168.123.100:0/2618341364' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:23.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:22 vm03 bash[23394]: audit 2026-03-10T14:55:22.635425+0000 mon.a (mon.0) 1003 : audit [INF] from='client.? 192.168.123.100:0/2618341364' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:55:23.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:22 vm03 bash[23394]: audit 2026-03-10T14:55:22.635425+0000 mon.a (mon.0) 1003 : audit [INF] from='client.? 
192.168.123.100:0/2618341364' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:55:23.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:22 vm03 bash[23394]: cluster 2026-03-10T14:55:22.638864+0000 mon.a (mon.0) 1004 : cluster [DBG] osdmap e145: 8 total, 8 up, 8 in 2026-03-10T14:55:23.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:22 vm03 bash[23394]: cluster 2026-03-10T14:55:22.638864+0000 mon.a (mon.0) 1004 : cluster [DBG] osdmap e145: 8 total, 8 up, 8 in 2026-03-10T14:55:23.648 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_list_ns_objects PASSED [ 36%] 2026-03-10T14:55:23.965 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:55:23 vm00 bash[21005]: ::ffff:192.168.123.103 - - [10/Mar/2026:14:55:23] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:55:23.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:23 vm00 bash[28403]: cluster 2026-03-10T14:55:22.264119+0000 mgr.y (mgr.24425) 143 : cluster [DBG] pgmap v167: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 362 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:23.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:23 vm00 bash[28403]: cluster 2026-03-10T14:55:22.264119+0000 mgr.y (mgr.24425) 143 : cluster [DBG] pgmap v167: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 362 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:23.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:23 vm00 bash[28403]: cluster 2026-03-10T14:55:23.642684+0000 mon.a (mon.0) 1005 : cluster [DBG] osdmap e146: 8 total, 8 up, 8 in 2026-03-10T14:55:23.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:23 vm00 bash[28403]: cluster 2026-03-10T14:55:23.642684+0000 mon.a (mon.0) 1005 : cluster [DBG] osdmap e146: 8 total, 8 up, 8 in 2026-03-10T14:55:23.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:23 vm00 bash[20726]: 
cluster 2026-03-10T14:55:22.264119+0000 mgr.y (mgr.24425) 143 : cluster [DBG] pgmap v167: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 362 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:23.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:23 vm00 bash[20726]: cluster 2026-03-10T14:55:22.264119+0000 mgr.y (mgr.24425) 143 : cluster [DBG] pgmap v167: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 362 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:23.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:23 vm00 bash[20726]: cluster 2026-03-10T14:55:23.642684+0000 mon.a (mon.0) 1005 : cluster [DBG] osdmap e146: 8 total, 8 up, 8 in 2026-03-10T14:55:23.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:23 vm00 bash[20726]: cluster 2026-03-10T14:55:23.642684+0000 mon.a (mon.0) 1005 : cluster [DBG] osdmap e146: 8 total, 8 up, 8 in 2026-03-10T14:55:24.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:23 vm03 bash[23394]: cluster 2026-03-10T14:55:22.264119+0000 mgr.y (mgr.24425) 143 : cluster [DBG] pgmap v167: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 362 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:24.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:23 vm03 bash[23394]: cluster 2026-03-10T14:55:22.264119+0000 mgr.y (mgr.24425) 143 : cluster [DBG] pgmap v167: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 362 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:24.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:23 vm03 bash[23394]: cluster 2026-03-10T14:55:23.642684+0000 mon.a (mon.0) 1005 : cluster [DBG] osdmap e146: 8 total, 8 up, 8 in 2026-03-10T14:55:24.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:23 vm03 bash[23394]: cluster 2026-03-10T14:55:23.642684+0000 mon.a (mon.0) 1005 : cluster [DBG] osdmap e146: 8 total, 8 up, 8 in 2026-03-10T14:55:24.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 
14:55:24 vm00 bash[28403]: audit 2026-03-10T14:55:24.402686+0000 mon.a (mon.0) 1006 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:55:24.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:24 vm00 bash[28403]: audit 2026-03-10T14:55:24.402686+0000 mon.a (mon.0) 1006 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:55:24.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:24 vm00 bash[28403]: cluster 2026-03-10T14:55:24.656597+0000 mon.a (mon.0) 1007 : cluster [DBG] osdmap e147: 8 total, 8 up, 8 in 2026-03-10T14:55:24.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:24 vm00 bash[28403]: cluster 2026-03-10T14:55:24.656597+0000 mon.a (mon.0) 1007 : cluster [DBG] osdmap e147: 8 total, 8 up, 8 in 2026-03-10T14:55:24.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:24 vm00 bash[20726]: audit 2026-03-10T14:55:24.402686+0000 mon.a (mon.0) 1006 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:55:24.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:24 vm00 bash[20726]: audit 2026-03-10T14:55:24.402686+0000 mon.a (mon.0) 1006 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:55:24.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:24 vm00 bash[20726]: cluster 2026-03-10T14:55:24.656597+0000 mon.a (mon.0) 1007 : cluster [DBG] osdmap e147: 8 total, 8 up, 8 in 2026-03-10T14:55:24.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:24 vm00 bash[20726]: cluster 2026-03-10T14:55:24.656597+0000 mon.a (mon.0) 1007 : cluster [DBG] osdmap e147: 8 total, 8 up, 8 in 2026-03-10T14:55:25.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 
14:55:24 vm03 bash[23394]: audit 2026-03-10T14:55:24.402686+0000 mon.a (mon.0) 1006 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:55:25.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:24 vm03 bash[23394]: audit 2026-03-10T14:55:24.402686+0000 mon.a (mon.0) 1006 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:55:25.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:24 vm03 bash[23394]: cluster 2026-03-10T14:55:24.656597+0000 mon.a (mon.0) 1007 : cluster [DBG] osdmap e147: 8 total, 8 up, 8 in 2026-03-10T14:55:25.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:24 vm03 bash[23394]: cluster 2026-03-10T14:55:24.656597+0000 mon.a (mon.0) 1007 : cluster [DBG] osdmap e147: 8 total, 8 up, 8 in 2026-03-10T14:55:25.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:25 vm00 bash[28403]: cluster 2026-03-10T14:55:24.264424+0000 mgr.y (mgr.24425) 144 : cluster [DBG] pgmap v170: 164 pgs: 164 active+clean; 455 KiB data, 362 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:55:25.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:25 vm00 bash[28403]: cluster 2026-03-10T14:55:24.264424+0000 mgr.y (mgr.24425) 144 : cluster [DBG] pgmap v170: 164 pgs: 164 active+clean; 455 KiB data, 362 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:55:25.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:25 vm00 bash[28403]: cluster 2026-03-10T14:55:25.663025+0000 mon.a (mon.0) 1008 : cluster [DBG] osdmap e148: 8 total, 8 up, 8 in 2026-03-10T14:55:25.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:25 vm00 bash[28403]: cluster 2026-03-10T14:55:25.663025+0000 mon.a (mon.0) 1008 : cluster [DBG] osdmap e148: 8 total, 8 up, 8 in 2026-03-10T14:55:25.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:25 vm00 bash[20726]: cluster 
2026-03-10T14:55:24.264424+0000 mgr.y (mgr.24425) 144 : cluster [DBG] pgmap v170: 164 pgs: 164 active+clean; 455 KiB data, 362 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:55:25.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:25 vm00 bash[20726]: cluster 2026-03-10T14:55:24.264424+0000 mgr.y (mgr.24425) 144 : cluster [DBG] pgmap v170: 164 pgs: 164 active+clean; 455 KiB data, 362 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:55:25.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:25 vm00 bash[20726]: cluster 2026-03-10T14:55:25.663025+0000 mon.a (mon.0) 1008 : cluster [DBG] osdmap e148: 8 total, 8 up, 8 in 2026-03-10T14:55:25.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:25 vm00 bash[20726]: cluster 2026-03-10T14:55:25.663025+0000 mon.a (mon.0) 1008 : cluster [DBG] osdmap e148: 8 total, 8 up, 8 in 2026-03-10T14:55:26.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:25 vm03 bash[23394]: cluster 2026-03-10T14:55:24.264424+0000 mgr.y (mgr.24425) 144 : cluster [DBG] pgmap v170: 164 pgs: 164 active+clean; 455 KiB data, 362 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:55:26.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:25 vm03 bash[23394]: cluster 2026-03-10T14:55:24.264424+0000 mgr.y (mgr.24425) 144 : cluster [DBG] pgmap v170: 164 pgs: 164 active+clean; 455 KiB data, 362 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:55:26.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:25 vm03 bash[23394]: cluster 2026-03-10T14:55:25.663025+0000 mon.a (mon.0) 1008 : cluster [DBG] osdmap e148: 8 total, 8 up, 8 in 2026-03-10T14:55:26.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:25 vm03 bash[23394]: cluster 2026-03-10T14:55:25.663025+0000 mon.a (mon.0) 1008 : cluster [DBG] osdmap e148: 8 total, 8 up, 8 in 2026-03-10T14:55:27.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:27 vm03 bash[23394]: audit 2026-03-10T14:55:25.706625+0000 mon.c (mon.2) 35 : audit [INF] from='client.? 
192.168.123.100:0/2271839987' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:27.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:27 vm03 bash[23394]: audit 2026-03-10T14:55:25.706625+0000 mon.c (mon.2) 35 : audit [INF] from='client.? 192.168.123.100:0/2271839987' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:27.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:27 vm03 bash[23394]: audit 2026-03-10T14:55:25.707028+0000 mon.a (mon.0) 1009 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:27.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:27 vm03 bash[23394]: audit 2026-03-10T14:55:25.707028+0000 mon.a (mon.0) 1009 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:27.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:27 vm00 bash[28403]: audit 2026-03-10T14:55:25.706625+0000 mon.c (mon.2) 35 : audit [INF] from='client.? 192.168.123.100:0/2271839987' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:27.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:27 vm00 bash[28403]: audit 2026-03-10T14:55:25.706625+0000 mon.c (mon.2) 35 : audit [INF] from='client.? 192.168.123.100:0/2271839987' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:27.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:27 vm00 bash[28403]: audit 2026-03-10T14:55:25.707028+0000 mon.a (mon.0) 1009 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:27.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:27 vm00 bash[28403]: audit 2026-03-10T14:55:25.707028+0000 mon.a (mon.0) 1009 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:27.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:27 vm00 bash[20726]: audit 2026-03-10T14:55:25.706625+0000 mon.c (mon.2) 35 : audit [INF] from='client.? 192.168.123.100:0/2271839987' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:27.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:27 vm00 bash[20726]: audit 2026-03-10T14:55:25.706625+0000 mon.c (mon.2) 35 : audit [INF] from='client.? 192.168.123.100:0/2271839987' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:27.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:27 vm00 bash[20726]: audit 2026-03-10T14:55:25.707028+0000 mon.a (mon.0) 1009 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:27.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:27 vm00 bash[20726]: audit 2026-03-10T14:55:25.707028+0000 mon.a (mon.0) 1009 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:27.877 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_xattrs PASSED [ 37%] 2026-03-10T14:55:28.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:28 vm03 bash[23394]: cluster 2026-03-10T14:55:26.264736+0000 mgr.y (mgr.24425) 145 : cluster [DBG] pgmap v173: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 363 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:28.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:28 vm03 bash[23394]: cluster 2026-03-10T14:55:26.264736+0000 mgr.y (mgr.24425) 145 : cluster [DBG] pgmap v173: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 363 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:28.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:28 vm03 bash[23394]: cluster 2026-03-10T14:55:26.697234+0000 mon.a (mon.0) 1010 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:55:28.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:28 vm03 bash[23394]: cluster 2026-03-10T14:55:26.697234+0000 mon.a (mon.0) 1010 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:55:28.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:28 vm03 bash[23394]: audit 2026-03-10T14:55:26.858869+0000 mon.a (mon.0) 1011 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:55:28.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:28 vm03 bash[23394]: audit 2026-03-10T14:55:26.858869+0000 mon.a (mon.0) 1011 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:55:28.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:28 vm03 bash[23394]: cluster 2026-03-10T14:55:27.013785+0000 mon.a (mon.0) 1012 : cluster [DBG] osdmap e149: 8 total, 8 up, 8 in 2026-03-10T14:55:28.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:28 vm03 bash[23394]: cluster 2026-03-10T14:55:27.013785+0000 mon.a (mon.0) 1012 : cluster [DBG] osdmap e149: 8 total, 8 up, 8 in 2026-03-10T14:55:28.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:28 vm03 bash[23394]: cluster 2026-03-10T14:55:27.872733+0000 mon.a (mon.0) 1013 : cluster [DBG] osdmap e150: 8 total, 8 up, 8 in 2026-03-10T14:55:28.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:28 vm03 bash[23394]: cluster 2026-03-10T14:55:27.872733+0000 mon.a (mon.0) 1013 : cluster [DBG] osdmap e150: 8 total, 8 up, 8 in 2026-03-10T14:55:28.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:28 vm00 bash[20726]: cluster 2026-03-10T14:55:26.264736+0000 mgr.y (mgr.24425) 145 : cluster [DBG] pgmap v173: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 363 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:28.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:28 vm00 bash[20726]: cluster 2026-03-10T14:55:26.264736+0000 mgr.y (mgr.24425) 145 : cluster [DBG] pgmap v173: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 363 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:28.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:28 vm00 bash[20726]: cluster 2026-03-10T14:55:26.697234+0000 mon.a (mon.0) 1010 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:55:28.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:28 vm00 bash[20726]: cluster 2026-03-10T14:55:26.697234+0000 mon.a (mon.0) 1010 : cluster [WRN] Health check update: 2 pool(s) do not have an 
application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:55:28.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:28 vm00 bash[20726]: audit 2026-03-10T14:55:26.858869+0000 mon.a (mon.0) 1011 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:55:28.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:28 vm00 bash[20726]: audit 2026-03-10T14:55:26.858869+0000 mon.a (mon.0) 1011 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:55:28.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:28 vm00 bash[20726]: cluster 2026-03-10T14:55:27.013785+0000 mon.a (mon.0) 1012 : cluster [DBG] osdmap e149: 8 total, 8 up, 8 in 2026-03-10T14:55:28.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:28 vm00 bash[20726]: cluster 2026-03-10T14:55:27.013785+0000 mon.a (mon.0) 1012 : cluster [DBG] osdmap e149: 8 total, 8 up, 8 in 2026-03-10T14:55:28.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:28 vm00 bash[20726]: cluster 2026-03-10T14:55:27.872733+0000 mon.a (mon.0) 1013 : cluster [DBG] osdmap e150: 8 total, 8 up, 8 in 2026-03-10T14:55:28.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:28 vm00 bash[20726]: cluster 2026-03-10T14:55:27.872733+0000 mon.a (mon.0) 1013 : cluster [DBG] osdmap e150: 8 total, 8 up, 8 in 2026-03-10T14:55:28.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:28 vm00 bash[28403]: cluster 2026-03-10T14:55:26.264736+0000 mgr.y (mgr.24425) 145 : cluster [DBG] pgmap v173: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 363 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:28.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:28 vm00 bash[28403]: cluster 2026-03-10T14:55:26.264736+0000 mgr.y (mgr.24425) 145 : cluster [DBG] pgmap v173: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 363 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s 
rd, 1 op/s 2026-03-10T14:55:28.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:28 vm00 bash[28403]: cluster 2026-03-10T14:55:26.697234+0000 mon.a (mon.0) 1010 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:55:28.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:28 vm00 bash[28403]: cluster 2026-03-10T14:55:26.697234+0000 mon.a (mon.0) 1010 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:55:28.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:28 vm00 bash[28403]: audit 2026-03-10T14:55:26.858869+0000 mon.a (mon.0) 1011 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:55:28.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:28 vm00 bash[28403]: audit 2026-03-10T14:55:26.858869+0000 mon.a (mon.0) 1011 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:55:28.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:28 vm00 bash[28403]: cluster 2026-03-10T14:55:27.013785+0000 mon.a (mon.0) 1012 : cluster [DBG] osdmap e149: 8 total, 8 up, 8 in 2026-03-10T14:55:28.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:28 vm00 bash[28403]: cluster 2026-03-10T14:55:27.013785+0000 mon.a (mon.0) 1012 : cluster [DBG] osdmap e149: 8 total, 8 up, 8 in 2026-03-10T14:55:28.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:28 vm00 bash[28403]: cluster 2026-03-10T14:55:27.872733+0000 mon.a (mon.0) 1013 : cluster [DBG] osdmap e150: 8 total, 8 up, 8 in 2026-03-10T14:55:28.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:28 vm00 bash[28403]: cluster 2026-03-10T14:55:27.872733+0000 mon.a (mon.0) 1013 : cluster [DBG] osdmap e150: 8 total, 8 up, 8 in 2026-03-10T14:55:29.125 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 14:55:28 vm03 bash[48459]: debug there 
is no tcmu-runner data available 2026-03-10T14:55:29.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:29 vm00 bash[20726]: cluster 2026-03-10T14:55:28.265083+0000 mgr.y (mgr.24425) 146 : cluster [DBG] pgmap v176: 164 pgs: 164 active+clean; 455 KiB data, 363 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:29.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:29 vm00 bash[20726]: cluster 2026-03-10T14:55:28.265083+0000 mgr.y (mgr.24425) 146 : cluster [DBG] pgmap v176: 164 pgs: 164 active+clean; 455 KiB data, 363 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:29.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:29 vm00 bash[20726]: audit 2026-03-10T14:55:28.636557+0000 mgr.y (mgr.24425) 147 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:55:29.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:29 vm00 bash[20726]: audit 2026-03-10T14:55:28.636557+0000 mgr.y (mgr.24425) 147 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:55:29.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:29 vm00 bash[20726]: cluster 2026-03-10T14:55:28.872048+0000 mon.a (mon.0) 1014 : cluster [DBG] osdmap e151: 8 total, 8 up, 8 in 2026-03-10T14:55:29.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:29 vm00 bash[20726]: cluster 2026-03-10T14:55:28.872048+0000 mon.a (mon.0) 1014 : cluster [DBG] osdmap e151: 8 total, 8 up, 8 in 2026-03-10T14:55:29.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:29 vm00 bash[28403]: cluster 2026-03-10T14:55:28.265083+0000 mgr.y (mgr.24425) 146 : cluster [DBG] pgmap v176: 164 pgs: 164 active+clean; 455 KiB data, 363 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:29.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:29 vm00 bash[28403]: cluster 
2026-03-10T14:55:28.265083+0000 mgr.y (mgr.24425) 146 : cluster [DBG] pgmap v176: 164 pgs: 164 active+clean; 455 KiB data, 363 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:29.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:29 vm00 bash[28403]: audit 2026-03-10T14:55:28.636557+0000 mgr.y (mgr.24425) 147 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:55:29.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:29 vm00 bash[28403]: audit 2026-03-10T14:55:28.636557+0000 mgr.y (mgr.24425) 147 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:55:29.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:29 vm00 bash[28403]: cluster 2026-03-10T14:55:28.872048+0000 mon.a (mon.0) 1014 : cluster [DBG] osdmap e151: 8 total, 8 up, 8 in 2026-03-10T14:55:29.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:29 vm00 bash[28403]: cluster 2026-03-10T14:55:28.872048+0000 mon.a (mon.0) 1014 : cluster [DBG] osdmap e151: 8 total, 8 up, 8 in 2026-03-10T14:55:29.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:29 vm03 bash[23394]: cluster 2026-03-10T14:55:28.265083+0000 mgr.y (mgr.24425) 146 : cluster [DBG] pgmap v176: 164 pgs: 164 active+clean; 455 KiB data, 363 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:29.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:29 vm03 bash[23394]: cluster 2026-03-10T14:55:28.265083+0000 mgr.y (mgr.24425) 146 : cluster [DBG] pgmap v176: 164 pgs: 164 active+clean; 455 KiB data, 363 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:29.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:29 vm03 bash[23394]: audit 2026-03-10T14:55:28.636557+0000 mgr.y (mgr.24425) 147 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", 
"format": "json"}]: dispatch 2026-03-10T14:55:29.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:29 vm03 bash[23394]: audit 2026-03-10T14:55:28.636557+0000 mgr.y (mgr.24425) 147 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:55:29.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:29 vm03 bash[23394]: cluster 2026-03-10T14:55:28.872048+0000 mon.a (mon.0) 1014 : cluster [DBG] osdmap e151: 8 total, 8 up, 8 in 2026-03-10T14:55:29.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:29 vm03 bash[23394]: cluster 2026-03-10T14:55:28.872048+0000 mon.a (mon.0) 1014 : cluster [DBG] osdmap e151: 8 total, 8 up, 8 in 2026-03-10T14:55:31.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:30 vm00 bash[28403]: cluster 2026-03-10T14:55:29.902272+0000 mon.a (mon.0) 1015 : cluster [DBG] osdmap e152: 8 total, 8 up, 8 in 2026-03-10T14:55:31.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:30 vm00 bash[28403]: cluster 2026-03-10T14:55:29.902272+0000 mon.a (mon.0) 1015 : cluster [DBG] osdmap e152: 8 total, 8 up, 8 in 2026-03-10T14:55:31.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:30 vm00 bash[28403]: audit 2026-03-10T14:55:29.957682+0000 mon.c (mon.2) 36 : audit [INF] from='client.? 192.168.123.100:0/1048884815' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:31.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:30 vm00 bash[28403]: audit 2026-03-10T14:55:29.957682+0000 mon.c (mon.2) 36 : audit [INF] from='client.? 192.168.123.100:0/1048884815' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:31.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:30 vm00 bash[28403]: audit 2026-03-10T14:55:29.958152+0000 mon.a (mon.0) 1016 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:31.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:30 vm00 bash[28403]: audit 2026-03-10T14:55:29.958152+0000 mon.a (mon.0) 1016 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:31.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:30 vm00 bash[28403]: cluster 2026-03-10T14:55:30.265643+0000 mgr.y (mgr.24425) 148 : cluster [DBG] pgmap v179: 196 pgs: 6 unknown, 190 active+clean; 455 KiB data, 363 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 4 op/s 2026-03-10T14:55:31.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:30 vm00 bash[28403]: cluster 2026-03-10T14:55:30.265643+0000 mgr.y (mgr.24425) 148 : cluster [DBG] pgmap v179: 196 pgs: 6 unknown, 190 active+clean; 455 KiB data, 363 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 4 op/s 2026-03-10T14:55:31.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:30 vm00 bash[20726]: cluster 2026-03-10T14:55:29.902272+0000 mon.a (mon.0) 1015 : cluster [DBG] osdmap e152: 8 total, 8 up, 8 in 2026-03-10T14:55:31.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:30 vm00 bash[20726]: cluster 2026-03-10T14:55:29.902272+0000 mon.a (mon.0) 1015 : cluster [DBG] osdmap e152: 8 total, 8 up, 8 in 2026-03-10T14:55:31.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:30 vm00 bash[20726]: audit 2026-03-10T14:55:29.957682+0000 mon.c (mon.2) 36 : audit [INF] from='client.? 192.168.123.100:0/1048884815' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:31.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:30 vm00 bash[20726]: audit 2026-03-10T14:55:29.957682+0000 mon.c (mon.2) 36 : audit [INF] from='client.? 
192.168.123.100:0/1048884815' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:31.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:30 vm00 bash[20726]: audit 2026-03-10T14:55:29.958152+0000 mon.a (mon.0) 1016 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:31.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:30 vm00 bash[20726]: audit 2026-03-10T14:55:29.958152+0000 mon.a (mon.0) 1016 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:31.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:30 vm00 bash[20726]: cluster 2026-03-10T14:55:30.265643+0000 mgr.y (mgr.24425) 148 : cluster [DBG] pgmap v179: 196 pgs: 6 unknown, 190 active+clean; 455 KiB data, 363 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 4 op/s 2026-03-10T14:55:31.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:30 vm00 bash[20726]: cluster 2026-03-10T14:55:30.265643+0000 mgr.y (mgr.24425) 148 : cluster [DBG] pgmap v179: 196 pgs: 6 unknown, 190 active+clean; 455 KiB data, 363 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 4 op/s 2026-03-10T14:55:31.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:30 vm03 bash[23394]: cluster 2026-03-10T14:55:29.902272+0000 mon.a (mon.0) 1015 : cluster [DBG] osdmap e152: 8 total, 8 up, 8 in 2026-03-10T14:55:31.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:30 vm03 bash[23394]: cluster 2026-03-10T14:55:29.902272+0000 mon.a (mon.0) 1015 : cluster [DBG] osdmap e152: 8 total, 8 up, 8 in 2026-03-10T14:55:31.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:30 vm03 bash[23394]: audit 2026-03-10T14:55:29.957682+0000 mon.c (mon.2) 36 : audit [INF] from='client.? 
192.168.123.100:0/1048884815' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:31.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:30 vm03 bash[23394]: audit 2026-03-10T14:55:29.957682+0000 mon.c (mon.2) 36 : audit [INF] from='client.? 192.168.123.100:0/1048884815' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:31.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:30 vm03 bash[23394]: audit 2026-03-10T14:55:29.958152+0000 mon.a (mon.0) 1016 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:31.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:30 vm03 bash[23394]: audit 2026-03-10T14:55:29.958152+0000 mon.a (mon.0) 1016 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:31.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:30 vm03 bash[23394]: cluster 2026-03-10T14:55:30.265643+0000 mgr.y (mgr.24425) 148 : cluster [DBG] pgmap v179: 196 pgs: 6 unknown, 190 active+clean; 455 KiB data, 363 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 4 op/s 2026-03-10T14:55:31.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:30 vm03 bash[23394]: cluster 2026-03-10T14:55:30.265643+0000 mgr.y (mgr.24425) 148 : cluster [DBG] pgmap v179: 196 pgs: 6 unknown, 190 active+clean; 455 KiB data, 363 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 4 op/s 2026-03-10T14:55:31.938 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_obj_xattrs PASSED [ 38%] 2026-03-10T14:55:32.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:31 vm00 bash[20726]: audit 2026-03-10T14:55:30.903585+0000 mon.a (mon.0) 1017 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:55:32.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:31 vm00 bash[20726]: audit 2026-03-10T14:55:30.903585+0000 mon.a (mon.0) 1017 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:55:32.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:31 vm00 bash[20726]: cluster 2026-03-10T14:55:30.911641+0000 mon.a (mon.0) 1018 : cluster [DBG] osdmap e153: 8 total, 8 up, 8 in 2026-03-10T14:55:32.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:31 vm00 bash[20726]: cluster 2026-03-10T14:55:30.911641+0000 mon.a (mon.0) 1018 : cluster [DBG] osdmap e153: 8 total, 8 up, 8 in 2026-03-10T14:55:32.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:31 vm00 bash[28403]: audit 2026-03-10T14:55:30.903585+0000 mon.a (mon.0) 1017 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:55:32.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:31 vm00 bash[28403]: audit 2026-03-10T14:55:30.903585+0000 mon.a (mon.0) 1017 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:55:32.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:31 vm00 bash[28403]: cluster 2026-03-10T14:55:30.911641+0000 mon.a (mon.0) 1018 : cluster [DBG] osdmap e153: 8 total, 8 up, 8 in 2026-03-10T14:55:32.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:31 vm00 bash[28403]: cluster 2026-03-10T14:55:30.911641+0000 mon.a (mon.0) 1018 : cluster [DBG] osdmap e153: 8 total, 8 up, 8 in 2026-03-10T14:55:32.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:31 vm03 bash[23394]: audit 2026-03-10T14:55:30.903585+0000 mon.a (mon.0) 1017 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:55:32.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:31 vm03 bash[23394]: audit 2026-03-10T14:55:30.903585+0000 mon.a (mon.0) 1017 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:55:32.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:31 vm03 bash[23394]: cluster 2026-03-10T14:55:30.911641+0000 mon.a (mon.0) 1018 : cluster [DBG] osdmap e153: 8 total, 8 up, 8 in 2026-03-10T14:55:32.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:31 vm03 bash[23394]: cluster 2026-03-10T14:55:30.911641+0000 mon.a (mon.0) 1018 : cluster [DBG] osdmap e153: 8 total, 8 up, 8 in 2026-03-10T14:55:33.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:33 vm03 bash[23394]: cluster 2026-03-10T14:55:31.929333+0000 mon.a (mon.0) 1019 : cluster [DBG] osdmap e154: 8 total, 8 up, 8 in 2026-03-10T14:55:33.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:33 vm03 bash[23394]: cluster 2026-03-10T14:55:31.929333+0000 mon.a (mon.0) 1019 : cluster [DBG] osdmap e154: 8 total, 8 up, 8 in 2026-03-10T14:55:33.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:33 vm03 bash[23394]: cluster 2026-03-10T14:55:32.265942+0000 mgr.y (mgr.24425) 149 : cluster [DBG] pgmap v182: 164 pgs: 164 active+clean; 455 KiB data, 363 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:33.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:33 vm03 bash[23394]: cluster 2026-03-10T14:55:32.265942+0000 mgr.y (mgr.24425) 149 : cluster [DBG] pgmap v182: 164 pgs: 164 active+clean; 455 KiB data, 363 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:33.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:33 vm00 bash[28403]: cluster 2026-03-10T14:55:31.929333+0000 mon.a (mon.0) 1019 : cluster [DBG] osdmap e154: 8 total, 8 up, 8 in 2026-03-10T14:55:33.465 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:33 vm00 bash[28403]: cluster 2026-03-10T14:55:31.929333+0000 mon.a (mon.0) 1019 : cluster [DBG] osdmap e154: 8 total, 8 up, 8 in 2026-03-10T14:55:33.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:33 vm00 bash[28403]: cluster 2026-03-10T14:55:32.265942+0000 mgr.y (mgr.24425) 149 : cluster [DBG] pgmap v182: 164 pgs: 164 active+clean; 455 KiB data, 363 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:33.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:33 vm00 bash[28403]: cluster 2026-03-10T14:55:32.265942+0000 mgr.y (mgr.24425) 149 : cluster [DBG] pgmap v182: 164 pgs: 164 active+clean; 455 KiB data, 363 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:33.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:33 vm00 bash[20726]: cluster 2026-03-10T14:55:31.929333+0000 mon.a (mon.0) 1019 : cluster [DBG] osdmap e154: 8 total, 8 up, 8 in 2026-03-10T14:55:33.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:33 vm00 bash[20726]: cluster 2026-03-10T14:55:31.929333+0000 mon.a (mon.0) 1019 : cluster [DBG] osdmap e154: 8 total, 8 up, 8 in 2026-03-10T14:55:33.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:33 vm00 bash[20726]: cluster 2026-03-10T14:55:32.265942+0000 mgr.y (mgr.24425) 149 : cluster [DBG] pgmap v182: 164 pgs: 164 active+clean; 455 KiB data, 363 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:33.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:33 vm00 bash[20726]: cluster 2026-03-10T14:55:32.265942+0000 mgr.y (mgr.24425) 149 : cluster [DBG] pgmap v182: 164 pgs: 164 active+clean; 455 KiB data, 363 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:34.214 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:55:33 vm00 bash[21005]: ::ffff:192.168.123.103 - - [10/Mar/2026:14:55:33] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:55:34.875 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:34 vm03 bash[23394]: cluster 2026-03-10T14:55:32.943407+0000 mon.a (mon.0) 1020 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:55:34.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:34 vm03 bash[23394]: cluster 2026-03-10T14:55:32.943407+0000 mon.a (mon.0) 1020 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:55:34.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:34 vm03 bash[23394]: audit 2026-03-10T14:55:33.136314+0000 mon.b (mon.1) 33 : audit [INF] from='client.? 192.168.123.100:0/1909683773' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:34.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:34 vm03 bash[23394]: audit 2026-03-10T14:55:33.136314+0000 mon.b (mon.1) 33 : audit [INF] from='client.? 192.168.123.100:0/1909683773' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:34.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:34 vm03 bash[23394]: cluster 2026-03-10T14:55:33.137809+0000 mon.a (mon.0) 1021 : cluster [DBG] osdmap e155: 8 total, 8 up, 8 in 2026-03-10T14:55:34.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:34 vm03 bash[23394]: cluster 2026-03-10T14:55:33.137809+0000 mon.a (mon.0) 1021 : cluster [DBG] osdmap e155: 8 total, 8 up, 8 in 2026-03-10T14:55:34.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:34 vm03 bash[23394]: audit 2026-03-10T14:55:33.148753+0000 mon.a (mon.0) 1022 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:34.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:34 vm03 bash[23394]: audit 2026-03-10T14:55:33.148753+0000 mon.a (mon.0) 1022 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:34.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:34 vm00 bash[28403]: cluster 2026-03-10T14:55:32.943407+0000 mon.a (mon.0) 1020 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:55:34.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:34 vm00 bash[28403]: cluster 2026-03-10T14:55:32.943407+0000 mon.a (mon.0) 1020 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:55:34.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:34 vm00 bash[28403]: audit 2026-03-10T14:55:33.136314+0000 mon.b (mon.1) 33 : audit [INF] from='client.? 192.168.123.100:0/1909683773' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:34.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:34 vm00 bash[28403]: audit 2026-03-10T14:55:33.136314+0000 mon.b (mon.1) 33 : audit [INF] from='client.? 192.168.123.100:0/1909683773' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:34.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:34 vm00 bash[28403]: cluster 2026-03-10T14:55:33.137809+0000 mon.a (mon.0) 1021 : cluster [DBG] osdmap e155: 8 total, 8 up, 8 in 2026-03-10T14:55:34.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:34 vm00 bash[28403]: cluster 2026-03-10T14:55:33.137809+0000 mon.a (mon.0) 1021 : cluster [DBG] osdmap e155: 8 total, 8 up, 8 in 2026-03-10T14:55:34.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:34 vm00 bash[28403]: audit 2026-03-10T14:55:33.148753+0000 mon.a (mon.0) 1022 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:34.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:34 vm00 bash[28403]: audit 2026-03-10T14:55:33.148753+0000 mon.a (mon.0) 1022 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:34.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:34 vm00 bash[20726]: cluster 2026-03-10T14:55:32.943407+0000 mon.a (mon.0) 1020 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:55:34.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:34 vm00 bash[20726]: cluster 2026-03-10T14:55:32.943407+0000 mon.a (mon.0) 1020 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:55:34.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:34 vm00 bash[20726]: audit 2026-03-10T14:55:33.136314+0000 mon.b (mon.1) 33 : audit [INF] from='client.? 192.168.123.100:0/1909683773' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:34.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:34 vm00 bash[20726]: audit 2026-03-10T14:55:33.136314+0000 mon.b (mon.1) 33 : audit [INF] from='client.? 
192.168.123.100:0/1909683773' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:34.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:34 vm00 bash[20726]: cluster 2026-03-10T14:55:33.137809+0000 mon.a (mon.0) 1021 : cluster [DBG] osdmap e155: 8 total, 8 up, 8 in 2026-03-10T14:55:34.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:34 vm00 bash[20726]: cluster 2026-03-10T14:55:33.137809+0000 mon.a (mon.0) 1021 : cluster [DBG] osdmap e155: 8 total, 8 up, 8 in 2026-03-10T14:55:34.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:34 vm00 bash[20726]: audit 2026-03-10T14:55:33.148753+0000 mon.a (mon.0) 1022 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:34.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:34 vm00 bash[20726]: audit 2026-03-10T14:55:33.148753+0000 mon.a (mon.0) 1022 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:35.348 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_get_pool_id PASSED [ 39%] 2026-03-10T14:55:35.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:35 vm03 bash[23394]: cluster 2026-03-10T14:55:34.266205+0000 mgr.y (mgr.24425) 150 : cluster [DBG] pgmap v184: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 363 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-10T14:55:35.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:35 vm03 bash[23394]: cluster 2026-03-10T14:55:34.266205+0000 mgr.y (mgr.24425) 150 : cluster [DBG] pgmap v184: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 363 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-10T14:55:35.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:35 vm03 bash[23394]: audit 2026-03-10T14:55:34.340704+0000 mon.a (mon.0) 1023 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:55:35.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:35 vm03 bash[23394]: audit 2026-03-10T14:55:34.340704+0000 mon.a (mon.0) 1023 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:55:35.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:35 vm03 bash[23394]: cluster 2026-03-10T14:55:34.398481+0000 mon.a (mon.0) 1024 : cluster [DBG] osdmap e156: 8 total, 8 up, 8 in 2026-03-10T14:55:35.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:35 vm03 bash[23394]: cluster 2026-03-10T14:55:34.398481+0000 mon.a (mon.0) 1024 : cluster [DBG] osdmap e156: 8 total, 8 up, 8 in 2026-03-10T14:55:35.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:35 vm03 bash[23394]: cluster 2026-03-10T14:55:35.349717+0000 mon.a (mon.0) 1025 : cluster [DBG] osdmap e157: 8 total, 8 up, 8 in 2026-03-10T14:55:35.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:35 vm03 bash[23394]: cluster 2026-03-10T14:55:35.349717+0000 mon.a (mon.0) 1025 : cluster [DBG] osdmap e157: 8 total, 8 up, 8 in 2026-03-10T14:55:35.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:35 vm00 bash[28403]: cluster 2026-03-10T14:55:34.266205+0000 mgr.y (mgr.24425) 150 : cluster [DBG] pgmap v184: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 363 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-10T14:55:35.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:35 vm00 bash[28403]: cluster 2026-03-10T14:55:34.266205+0000 mgr.y (mgr.24425) 150 : cluster [DBG] pgmap v184: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 363 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-10T14:55:35.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:35 vm00 bash[28403]: audit 2026-03-10T14:55:34.340704+0000 mon.a (mon.0) 1023 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:55:35.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:35 vm00 bash[28403]: audit 2026-03-10T14:55:34.340704+0000 mon.a (mon.0) 1023 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:55:35.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:35 vm00 bash[28403]: cluster 2026-03-10T14:55:34.398481+0000 mon.a (mon.0) 1024 : cluster [DBG] osdmap e156: 8 total, 8 up, 8 in 2026-03-10T14:55:35.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:35 vm00 bash[28403]: cluster 2026-03-10T14:55:34.398481+0000 mon.a (mon.0) 1024 : cluster [DBG] osdmap e156: 8 total, 8 up, 8 in 2026-03-10T14:55:35.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:35 vm00 bash[28403]: cluster 2026-03-10T14:55:35.349717+0000 mon.a (mon.0) 1025 : cluster [DBG] osdmap e157: 8 total, 8 up, 8 in 2026-03-10T14:55:35.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:35 vm00 bash[28403]: cluster 2026-03-10T14:55:35.349717+0000 mon.a (mon.0) 1025 : cluster [DBG] osdmap e157: 8 total, 8 up, 8 in 2026-03-10T14:55:35.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:35 vm00 bash[20726]: cluster 2026-03-10T14:55:34.266205+0000 mgr.y (mgr.24425) 150 : cluster [DBG] pgmap v184: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 363 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-10T14:55:35.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:35 vm00 bash[20726]: cluster 2026-03-10T14:55:34.266205+0000 mgr.y (mgr.24425) 150 : cluster [DBG] pgmap v184: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 363 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-10T14:55:35.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:35 vm00 bash[20726]: audit 2026-03-10T14:55:34.340704+0000 mon.a (mon.0) 1023 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:55:35.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:35 vm00 bash[20726]: audit 2026-03-10T14:55:34.340704+0000 mon.a (mon.0) 1023 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:55:35.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:35 vm00 bash[20726]: cluster 2026-03-10T14:55:34.398481+0000 mon.a (mon.0) 1024 : cluster [DBG] osdmap e156: 8 total, 8 up, 8 in 2026-03-10T14:55:35.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:35 vm00 bash[20726]: cluster 2026-03-10T14:55:34.398481+0000 mon.a (mon.0) 1024 : cluster [DBG] osdmap e156: 8 total, 8 up, 8 in 2026-03-10T14:55:35.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:35 vm00 bash[20726]: cluster 2026-03-10T14:55:35.349717+0000 mon.a (mon.0) 1025 : cluster [DBG] osdmap e157: 8 total, 8 up, 8 in 2026-03-10T14:55:35.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:35 vm00 bash[20726]: cluster 2026-03-10T14:55:35.349717+0000 mon.a (mon.0) 1025 : cluster [DBG] osdmap e157: 8 total, 8 up, 8 in 2026-03-10T14:55:37.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:37 vm00 bash[28403]: cluster 2026-03-10T14:55:36.266503+0000 mgr.y (mgr.24425) 151 : cluster [DBG] pgmap v187: 164 pgs: 164 active+clean; 455 KiB data, 363 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:37.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:37 vm00 bash[28403]: cluster 2026-03-10T14:55:36.266503+0000 mgr.y (mgr.24425) 151 : cluster [DBG] pgmap v187: 164 pgs: 164 active+clean; 455 KiB data, 363 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:37.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:37 vm00 bash[28403]: audit 2026-03-10T14:55:36.360780+0000 mon.b (mon.1) 34 : audit [INF] from='client.? 
192.168.123.100:0/3322851469' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:37.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:37 vm00 bash[28403]: audit 2026-03-10T14:55:36.360780+0000 mon.b (mon.1) 34 : audit [INF] from='client.? 192.168.123.100:0/3322851469' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:37.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:37 vm00 bash[28403]: cluster 2026-03-10T14:55:36.361725+0000 mon.a (mon.0) 1026 : cluster [DBG] osdmap e158: 8 total, 8 up, 8 in 2026-03-10T14:55:37.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:37 vm00 bash[28403]: cluster 2026-03-10T14:55:36.361725+0000 mon.a (mon.0) 1026 : cluster [DBG] osdmap e158: 8 total, 8 up, 8 in 2026-03-10T14:55:37.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:37 vm00 bash[28403]: audit 2026-03-10T14:55:36.364619+0000 mon.a (mon.0) 1027 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:37.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:37 vm00 bash[28403]: audit 2026-03-10T14:55:36.364619+0000 mon.a (mon.0) 1027 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:37.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:37 vm00 bash[20726]: cluster 2026-03-10T14:55:36.266503+0000 mgr.y (mgr.24425) 151 : cluster [DBG] pgmap v187: 164 pgs: 164 active+clean; 455 KiB data, 363 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:37.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:37 vm00 bash[20726]: cluster 2026-03-10T14:55:36.266503+0000 mgr.y (mgr.24425) 151 : cluster [DBG] pgmap v187: 164 pgs: 164 active+clean; 455 KiB data, 363 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:37.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:37 vm00 bash[20726]: audit 2026-03-10T14:55:36.360780+0000 mon.b (mon.1) 34 : audit [INF] from='client.? 192.168.123.100:0/3322851469' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:37.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:37 vm00 bash[20726]: audit 2026-03-10T14:55:36.360780+0000 mon.b (mon.1) 34 : audit [INF] from='client.? 192.168.123.100:0/3322851469' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:37.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:37 vm00 bash[20726]: cluster 2026-03-10T14:55:36.361725+0000 mon.a (mon.0) 1026 : cluster [DBG] osdmap e158: 8 total, 8 up, 8 in 2026-03-10T14:55:37.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:37 vm00 bash[20726]: cluster 2026-03-10T14:55:36.361725+0000 mon.a (mon.0) 1026 : cluster [DBG] osdmap e158: 8 total, 8 up, 8 in 2026-03-10T14:55:37.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:37 vm00 bash[20726]: audit 2026-03-10T14:55:36.364619+0000 mon.a (mon.0) 1027 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:37.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:37 vm00 bash[20726]: audit 2026-03-10T14:55:36.364619+0000 mon.a (mon.0) 1027 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:37.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:37 vm03 bash[23394]: cluster 2026-03-10T14:55:36.266503+0000 mgr.y (mgr.24425) 151 : cluster [DBG] pgmap v187: 164 pgs: 164 active+clean; 455 KiB data, 363 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:37.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:37 vm03 bash[23394]: cluster 2026-03-10T14:55:36.266503+0000 mgr.y (mgr.24425) 151 : cluster [DBG] pgmap v187: 164 pgs: 164 active+clean; 455 KiB data, 363 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:37.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:37 vm03 bash[23394]: audit 2026-03-10T14:55:36.360780+0000 mon.b (mon.1) 34 : audit [INF] from='client.? 192.168.123.100:0/3322851469' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:37.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:37 vm03 bash[23394]: audit 2026-03-10T14:55:36.360780+0000 mon.b (mon.1) 34 : audit [INF] from='client.? 
192.168.123.100:0/3322851469' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:37.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:37 vm03 bash[23394]: cluster 2026-03-10T14:55:36.361725+0000 mon.a (mon.0) 1026 : cluster [DBG] osdmap e158: 8 total, 8 up, 8 in 2026-03-10T14:55:37.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:37 vm03 bash[23394]: cluster 2026-03-10T14:55:36.361725+0000 mon.a (mon.0) 1026 : cluster [DBG] osdmap e158: 8 total, 8 up, 8 in 2026-03-10T14:55:37.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:37 vm03 bash[23394]: audit 2026-03-10T14:55:36.364619+0000 mon.a (mon.0) 1027 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:37.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:37 vm03 bash[23394]: audit 2026-03-10T14:55:36.364619+0000 mon.a (mon.0) 1027 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:38.406 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_get_pool_name PASSED [ 40%] 2026-03-10T14:55:38.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:38 vm00 bash[28403]: audit 2026-03-10T14:55:37.400005+0000 mon.a (mon.0) 1028 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:55:38.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:38 vm00 bash[28403]: audit 2026-03-10T14:55:37.400005+0000 mon.a (mon.0) 1028 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:55:38.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:38 vm00 bash[28403]: cluster 2026-03-10T14:55:37.404983+0000 mon.a (mon.0) 1029 : cluster [DBG] osdmap e159: 8 total, 8 up, 8 in 2026-03-10T14:55:38.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:38 vm00 bash[28403]: cluster 2026-03-10T14:55:37.404983+0000 mon.a (mon.0) 1029 : cluster [DBG] osdmap e159: 8 total, 8 up, 8 in 2026-03-10T14:55:38.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:38 vm00 bash[28403]: cluster 2026-03-10T14:55:38.407465+0000 mon.a (mon.0) 1030 : cluster [DBG] osdmap e160: 8 total, 8 up, 8 in 2026-03-10T14:55:38.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:38 vm00 bash[28403]: cluster 2026-03-10T14:55:38.407465+0000 mon.a (mon.0) 1030 : cluster [DBG] osdmap e160: 8 total, 8 up, 8 in 2026-03-10T14:55:38.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:38 vm00 bash[20726]: audit 2026-03-10T14:55:37.400005+0000 mon.a (mon.0) 1028 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:55:38.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:38 vm00 bash[20726]: audit 2026-03-10T14:55:37.400005+0000 mon.a (mon.0) 1028 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:55:38.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:38 vm00 bash[20726]: cluster 2026-03-10T14:55:37.404983+0000 mon.a (mon.0) 1029 : cluster [DBG] osdmap e159: 8 total, 8 up, 8 in 2026-03-10T14:55:38.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:38 vm00 bash[20726]: cluster 2026-03-10T14:55:37.404983+0000 mon.a (mon.0) 1029 : cluster [DBG] osdmap e159: 8 total, 8 up, 8 in 2026-03-10T14:55:38.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:38 vm00 bash[20726]: cluster 2026-03-10T14:55:38.407465+0000 mon.a (mon.0) 1030 : cluster [DBG] osdmap e160: 8 total, 8 up, 8 in 2026-03-10T14:55:38.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:38 vm00 bash[20726]: cluster 2026-03-10T14:55:38.407465+0000 mon.a (mon.0) 1030 : cluster [DBG] osdmap e160: 8 total, 8 up, 8 in 2026-03-10T14:55:38.875 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 14:55:38 vm03 bash[48459]: debug there is no tcmu-runner data available 2026-03-10T14:55:38.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:38 vm03 bash[23394]: audit 2026-03-10T14:55:37.400005+0000 mon.a (mon.0) 1028 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:55:38.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:38 vm03 bash[23394]: audit 2026-03-10T14:55:37.400005+0000 mon.a (mon.0) 1028 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:55:38.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:38 vm03 bash[23394]: cluster 2026-03-10T14:55:37.404983+0000 mon.a (mon.0) 1029 : cluster [DBG] osdmap e159: 8 total, 8 up, 8 in 2026-03-10T14:55:38.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:38 vm03 bash[23394]: cluster 2026-03-10T14:55:37.404983+0000 mon.a (mon.0) 1029 : cluster [DBG] osdmap e159: 8 total, 8 up, 8 in 2026-03-10T14:55:38.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:38 vm03 bash[23394]: cluster 2026-03-10T14:55:38.407465+0000 mon.a (mon.0) 1030 : cluster [DBG] osdmap e160: 8 total, 8 up, 8 in 2026-03-10T14:55:38.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:38 vm03 bash[23394]: cluster 2026-03-10T14:55:38.407465+0000 mon.a (mon.0) 1030 : cluster [DBG] osdmap e160: 8 total, 8 up, 8 in 2026-03-10T14:55:39.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:39 vm00 bash[28403]: cluster 2026-03-10T14:55:38.266874+0000 mgr.y (mgr.24425) 152 : cluster [DBG] pgmap v190: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 363 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:39.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:39 vm00 bash[28403]: cluster 2026-03-10T14:55:38.266874+0000 mgr.y (mgr.24425) 152 : cluster [DBG] pgmap v190: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 363 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:39.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:39 vm00 bash[28403]: cluster 2026-03-10T14:55:38.418767+0000 mon.a (mon.0) 1031 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:55:39.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:39 vm00 bash[28403]: cluster 2026-03-10T14:55:38.418767+0000 mon.a (mon.0) 1031 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled 
(POOL_APP_NOT_ENABLED) 2026-03-10T14:55:39.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:39 vm00 bash[28403]: audit 2026-03-10T14:55:38.647435+0000 mgr.y (mgr.24425) 153 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:55:39.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:39 vm00 bash[28403]: audit 2026-03-10T14:55:38.647435+0000 mgr.y (mgr.24425) 153 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:55:39.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:39 vm00 bash[28403]: cluster 2026-03-10T14:55:39.424949+0000 mon.a (mon.0) 1032 : cluster [DBG] osdmap e161: 8 total, 8 up, 8 in 2026-03-10T14:55:39.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:39 vm00 bash[28403]: cluster 2026-03-10T14:55:39.424949+0000 mon.a (mon.0) 1032 : cluster [DBG] osdmap e161: 8 total, 8 up, 8 in 2026-03-10T14:55:39.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:39 vm00 bash[28403]: audit 2026-03-10T14:55:39.426554+0000 mon.a (mon.0) 1033 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:55:39.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:39 vm00 bash[28403]: audit 2026-03-10T14:55:39.426554+0000 mon.a (mon.0) 1033 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:55:39.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:39 vm00 bash[20726]: cluster 2026-03-10T14:55:38.266874+0000 mgr.y (mgr.24425) 152 : cluster [DBG] pgmap v190: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 363 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:39.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:39 vm00 bash[20726]: cluster 
2026-03-10T14:55:38.266874+0000 mgr.y (mgr.24425) 152 : cluster [DBG] pgmap v190: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 363 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:39.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:39 vm00 bash[20726]: cluster 2026-03-10T14:55:38.418767+0000 mon.a (mon.0) 1031 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:55:39.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:39 vm00 bash[20726]: cluster 2026-03-10T14:55:38.418767+0000 mon.a (mon.0) 1031 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:55:39.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:39 vm00 bash[20726]: audit 2026-03-10T14:55:38.647435+0000 mgr.y (mgr.24425) 153 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:55:39.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:39 vm00 bash[20726]: audit 2026-03-10T14:55:38.647435+0000 mgr.y (mgr.24425) 153 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:55:39.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:39 vm00 bash[20726]: cluster 2026-03-10T14:55:39.424949+0000 mon.a (mon.0) 1032 : cluster [DBG] osdmap e161: 8 total, 8 up, 8 in 2026-03-10T14:55:39.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:39 vm00 bash[20726]: cluster 2026-03-10T14:55:39.424949+0000 mon.a (mon.0) 1032 : cluster [DBG] osdmap e161: 8 total, 8 up, 8 in 2026-03-10T14:55:39.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:39 vm00 bash[20726]: audit 2026-03-10T14:55:39.426554+0000 mon.a (mon.0) 1033 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 
2026-03-10T14:55:39.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:39 vm00 bash[20726]: audit 2026-03-10T14:55:39.426554+0000 mon.a (mon.0) 1033 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:55:39.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:39 vm03 bash[23394]: cluster 2026-03-10T14:55:38.266874+0000 mgr.y (mgr.24425) 152 : cluster [DBG] pgmap v190: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 363 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:39.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:39 vm03 bash[23394]: cluster 2026-03-10T14:55:38.266874+0000 mgr.y (mgr.24425) 152 : cluster [DBG] pgmap v190: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 363 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:39.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:39 vm03 bash[23394]: cluster 2026-03-10T14:55:38.418767+0000 mon.a (mon.0) 1031 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:55:39.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:39 vm03 bash[23394]: cluster 2026-03-10T14:55:38.418767+0000 mon.a (mon.0) 1031 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:55:39.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:39 vm03 bash[23394]: audit 2026-03-10T14:55:38.647435+0000 mgr.y (mgr.24425) 153 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:55:39.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:39 vm03 bash[23394]: audit 2026-03-10T14:55:38.647435+0000 mgr.y (mgr.24425) 153 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 
2026-03-10T14:55:39.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:39 vm03 bash[23394]: cluster 2026-03-10T14:55:39.424949+0000 mon.a (mon.0) 1032 : cluster [DBG] osdmap e161: 8 total, 8 up, 8 in 2026-03-10T14:55:39.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:39 vm03 bash[23394]: cluster 2026-03-10T14:55:39.424949+0000 mon.a (mon.0) 1032 : cluster [DBG] osdmap e161: 8 total, 8 up, 8 in 2026-03-10T14:55:39.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:39 vm03 bash[23394]: audit 2026-03-10T14:55:39.426554+0000 mon.a (mon.0) 1033 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:55:39.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:39 vm03 bash[23394]: audit 2026-03-10T14:55:39.426554+0000 mon.a (mon.0) 1033 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:55:41.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:41 vm00 bash[28403]: cluster 2026-03-10T14:55:40.267356+0000 mgr.y (mgr.24425) 154 : cluster [DBG] pgmap v193: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 364 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:41.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:41 vm00 bash[28403]: cluster 2026-03-10T14:55:40.267356+0000 mgr.y (mgr.24425) 154 : cluster [DBG] pgmap v193: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 364 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:41.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:41 vm00 bash[28403]: cluster 2026-03-10T14:55:40.427219+0000 mon.a (mon.0) 1034 : cluster [DBG] osdmap e162: 8 total, 8 up, 8 in 2026-03-10T14:55:41.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:41 vm00 bash[28403]: cluster 2026-03-10T14:55:40.427219+0000 mon.a (mon.0) 1034 : cluster [DBG] osdmap 
e162: 8 total, 8 up, 8 in 2026-03-10T14:55:41.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:41 vm00 bash[20726]: cluster 2026-03-10T14:55:40.267356+0000 mgr.y (mgr.24425) 154 : cluster [DBG] pgmap v193: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 364 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:41.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:41 vm00 bash[20726]: cluster 2026-03-10T14:55:40.267356+0000 mgr.y (mgr.24425) 154 : cluster [DBG] pgmap v193: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 364 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:41.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:41 vm00 bash[20726]: cluster 2026-03-10T14:55:40.427219+0000 mon.a (mon.0) 1034 : cluster [DBG] osdmap e162: 8 total, 8 up, 8 in 2026-03-10T14:55:41.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:41 vm00 bash[20726]: cluster 2026-03-10T14:55:40.427219+0000 mon.a (mon.0) 1034 : cluster [DBG] osdmap e162: 8 total, 8 up, 8 in 2026-03-10T14:55:41.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:41 vm03 bash[23394]: cluster 2026-03-10T14:55:40.267356+0000 mgr.y (mgr.24425) 154 : cluster [DBG] pgmap v193: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 364 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:41.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:41 vm03 bash[23394]: cluster 2026-03-10T14:55:40.267356+0000 mgr.y (mgr.24425) 154 : cluster [DBG] pgmap v193: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 364 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:41.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:41 vm03 bash[23394]: cluster 2026-03-10T14:55:40.427219+0000 mon.a (mon.0) 1034 : cluster [DBG] osdmap e162: 8 total, 8 up, 8 in 2026-03-10T14:55:41.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:41 vm03 bash[23394]: cluster 
2026-03-10T14:55:40.427219+0000 mon.a (mon.0) 1034 : cluster [DBG] osdmap e162: 8 total, 8 up, 8 in 2026-03-10T14:55:42.441 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:42 vm03 bash[23394]: cluster 2026-03-10T14:55:41.421454+0000 mon.a (mon.0) 1035 : cluster [DBG] osdmap e163: 8 total, 8 up, 8 in 2026-03-10T14:55:42.441 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:42 vm03 bash[23394]: cluster 2026-03-10T14:55:41.421454+0000 mon.a (mon.0) 1035 : cluster [DBG] osdmap e163: 8 total, 8 up, 8 in 2026-03-10T14:55:42.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:42 vm00 bash[28403]: cluster 2026-03-10T14:55:41.421454+0000 mon.a (mon.0) 1035 : cluster [DBG] osdmap e163: 8 total, 8 up, 8 in 2026-03-10T14:55:42.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:42 vm00 bash[28403]: cluster 2026-03-10T14:55:41.421454+0000 mon.a (mon.0) 1035 : cluster [DBG] osdmap e163: 8 total, 8 up, 8 in 2026-03-10T14:55:42.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:42 vm00 bash[28403]: audit 2026-03-10T14:55:41.428587+0000 mon.a (mon.0) 1036 : audit [INF] from='client.? 192.168.123.100:0/1144154985' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:42.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:42 vm00 bash[28403]: audit 2026-03-10T14:55:41.428587+0000 mon.a (mon.0) 1036 : audit [INF] from='client.? 192.168.123.100:0/1144154985' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:42.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:42 vm00 bash[28403]: audit 2026-03-10T14:55:42.430254+0000 mon.a (mon.0) 1037 : audit [INF] from='client.? 192.168.123.100:0/1144154985' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:55:42.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:42 vm00 bash[28403]: audit 2026-03-10T14:55:42.430254+0000 mon.a (mon.0) 1037 : audit [INF] from='client.? 
192.168.123.100:0/1144154985' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:55:42.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:42 vm00 bash[28403]: cluster 2026-03-10T14:55:42.433044+0000 mon.a (mon.0) 1038 : cluster [DBG] osdmap e164: 8 total, 8 up, 8 in 2026-03-10T14:55:42.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:42 vm00 bash[28403]: cluster 2026-03-10T14:55:42.433044+0000 mon.a (mon.0) 1038 : cluster [DBG] osdmap e164: 8 total, 8 up, 8 in 2026-03-10T14:55:42.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:42 vm00 bash[20726]: cluster 2026-03-10T14:55:41.421454+0000 mon.a (mon.0) 1035 : cluster [DBG] osdmap e163: 8 total, 8 up, 8 in 2026-03-10T14:55:42.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:42 vm00 bash[20726]: cluster 2026-03-10T14:55:41.421454+0000 mon.a (mon.0) 1035 : cluster [DBG] osdmap e163: 8 total, 8 up, 8 in 2026-03-10T14:55:42.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:42 vm00 bash[20726]: audit 2026-03-10T14:55:41.428587+0000 mon.a (mon.0) 1036 : audit [INF] from='client.? 192.168.123.100:0/1144154985' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:42.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:42 vm00 bash[20726]: audit 2026-03-10T14:55:41.428587+0000 mon.a (mon.0) 1036 : audit [INF] from='client.? 192.168.123.100:0/1144154985' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:42.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:42 vm00 bash[20726]: audit 2026-03-10T14:55:42.430254+0000 mon.a (mon.0) 1037 : audit [INF] from='client.? 192.168.123.100:0/1144154985' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:55:42.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:42 vm00 bash[20726]: audit 2026-03-10T14:55:42.430254+0000 mon.a (mon.0) 1037 : audit [INF] from='client.? 
192.168.123.100:0/1144154985' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:55:42.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:42 vm00 bash[20726]: cluster 2026-03-10T14:55:42.433044+0000 mon.a (mon.0) 1038 : cluster [DBG] osdmap e164: 8 total, 8 up, 8 in 2026-03-10T14:55:42.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:42 vm00 bash[20726]: cluster 2026-03-10T14:55:42.433044+0000 mon.a (mon.0) 1038 : cluster [DBG] osdmap e164: 8 total, 8 up, 8 in 2026-03-10T14:55:42.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:42 vm03 bash[23394]: audit 2026-03-10T14:55:41.428587+0000 mon.a (mon.0) 1036 : audit [INF] from='client.? 192.168.123.100:0/1144154985' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:42.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:42 vm03 bash[23394]: audit 2026-03-10T14:55:41.428587+0000 mon.a (mon.0) 1036 : audit [INF] from='client.? 192.168.123.100:0/1144154985' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:42.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:42 vm03 bash[23394]: audit 2026-03-10T14:55:42.430254+0000 mon.a (mon.0) 1037 : audit [INF] from='client.? 192.168.123.100:0/1144154985' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:55:42.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:42 vm03 bash[23394]: audit 2026-03-10T14:55:42.430254+0000 mon.a (mon.0) 1037 : audit [INF] from='client.? 
192.168.123.100:0/1144154985' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:55:42.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:42 vm03 bash[23394]: cluster 2026-03-10T14:55:42.433044+0000 mon.a (mon.0) 1038 : cluster [DBG] osdmap e164: 8 total, 8 up, 8 in 2026-03-10T14:55:42.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:42 vm03 bash[23394]: cluster 2026-03-10T14:55:42.433044+0000 mon.a (mon.0) 1038 : cluster [DBG] osdmap e164: 8 total, 8 up, 8 in 2026-03-10T14:55:43.442 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_create_snap PASSED [ 41%] 2026-03-10T14:55:43.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:43 vm00 bash[28403]: cluster 2026-03-10T14:55:42.267677+0000 mgr.y (mgr.24425) 155 : cluster [DBG] pgmap v196: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 364 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:43.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:43 vm00 bash[28403]: cluster 2026-03-10T14:55:42.267677+0000 mgr.y (mgr.24425) 155 : cluster [DBG] pgmap v196: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 364 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:43.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:43 vm00 bash[28403]: cluster 2026-03-10T14:55:43.442115+0000 mon.a (mon.0) 1039 : cluster [DBG] osdmap e165: 8 total, 8 up, 8 in 2026-03-10T14:55:43.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:43 vm00 bash[28403]: cluster 2026-03-10T14:55:43.442115+0000 mon.a (mon.0) 1039 : cluster [DBG] osdmap e165: 8 total, 8 up, 8 in 2026-03-10T14:55:43.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:43 vm00 bash[20726]: cluster 2026-03-10T14:55:42.267677+0000 mgr.y (mgr.24425) 155 : cluster [DBG] pgmap v196: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 364 MiB used, 160 GiB / 160 GiB avail; 
1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:43.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:43 vm00 bash[20726]: cluster 2026-03-10T14:55:42.267677+0000 mgr.y (mgr.24425) 155 : cluster [DBG] pgmap v196: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 364 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:43.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:43 vm00 bash[20726]: cluster 2026-03-10T14:55:43.442115+0000 mon.a (mon.0) 1039 : cluster [DBG] osdmap e165: 8 total, 8 up, 8 in 2026-03-10T14:55:43.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:43 vm00 bash[20726]: cluster 2026-03-10T14:55:43.442115+0000 mon.a (mon.0) 1039 : cluster [DBG] osdmap e165: 8 total, 8 up, 8 in 2026-03-10T14:55:43.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:43 vm03 bash[23394]: cluster 2026-03-10T14:55:42.267677+0000 mgr.y (mgr.24425) 155 : cluster [DBG] pgmap v196: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 364 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:43.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:43 vm03 bash[23394]: cluster 2026-03-10T14:55:42.267677+0000 mgr.y (mgr.24425) 155 : cluster [DBG] pgmap v196: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 364 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:43.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:43 vm03 bash[23394]: cluster 2026-03-10T14:55:43.442115+0000 mon.a (mon.0) 1039 : cluster [DBG] osdmap e165: 8 total, 8 up, 8 in 2026-03-10T14:55:43.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:43 vm03 bash[23394]: cluster 2026-03-10T14:55:43.442115+0000 mon.a (mon.0) 1039 : cluster [DBG] osdmap e165: 8 total, 8 up, 8 in 2026-03-10T14:55:44.214 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:55:43 vm00 bash[21005]: ::ffff:192.168.123.103 - - [10/Mar/2026:14:55:43] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:55:44.875 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:44 vm03 bash[23394]: cluster 2026-03-10T14:55:44.457835+0000 mon.a (mon.0) 1040 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:55:44.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:44 vm03 bash[23394]: cluster 2026-03-10T14:55:44.457835+0000 mon.a (mon.0) 1040 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:55:44.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:44 vm03 bash[23394]: audit 2026-03-10T14:55:44.482945+0000 mon.b (mon.1) 35 : audit [INF] from='client.? 192.168.123.100:0/2652299717' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:44.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:44 vm03 bash[23394]: audit 2026-03-10T14:55:44.482945+0000 mon.b (mon.1) 35 : audit [INF] from='client.? 192.168.123.100:0/2652299717' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:44.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:44 vm03 bash[23394]: cluster 2026-03-10T14:55:44.484595+0000 mon.a (mon.0) 1041 : cluster [DBG] osdmap e166: 8 total, 8 up, 8 in 2026-03-10T14:55:44.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:44 vm03 bash[23394]: cluster 2026-03-10T14:55:44.484595+0000 mon.a (mon.0) 1041 : cluster [DBG] osdmap e166: 8 total, 8 up, 8 in 2026-03-10T14:55:44.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:44 vm03 bash[23394]: audit 2026-03-10T14:55:44.493196+0000 mon.a (mon.0) 1042 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:44.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:44 vm03 bash[23394]: audit 2026-03-10T14:55:44.493196+0000 mon.a (mon.0) 1042 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:44.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:44 vm00 bash[28403]: cluster 2026-03-10T14:55:44.457835+0000 mon.a (mon.0) 1040 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:55:44.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:44 vm00 bash[28403]: cluster 2026-03-10T14:55:44.457835+0000 mon.a (mon.0) 1040 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:55:44.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:44 vm00 bash[28403]: audit 2026-03-10T14:55:44.482945+0000 mon.b (mon.1) 35 : audit [INF] from='client.? 192.168.123.100:0/2652299717' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:44.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:44 vm00 bash[28403]: audit 2026-03-10T14:55:44.482945+0000 mon.b (mon.1) 35 : audit [INF] from='client.? 192.168.123.100:0/2652299717' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:44.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:44 vm00 bash[28403]: cluster 2026-03-10T14:55:44.484595+0000 mon.a (mon.0) 1041 : cluster [DBG] osdmap e166: 8 total, 8 up, 8 in 2026-03-10T14:55:44.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:44 vm00 bash[28403]: cluster 2026-03-10T14:55:44.484595+0000 mon.a (mon.0) 1041 : cluster [DBG] osdmap e166: 8 total, 8 up, 8 in 2026-03-10T14:55:44.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:44 vm00 bash[28403]: audit 2026-03-10T14:55:44.493196+0000 mon.a (mon.0) 1042 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:44.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:44 vm00 bash[28403]: audit 2026-03-10T14:55:44.493196+0000 mon.a (mon.0) 1042 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:44.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:44 vm00 bash[20726]: cluster 2026-03-10T14:55:44.457835+0000 mon.a (mon.0) 1040 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:55:44.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:44 vm00 bash[20726]: cluster 2026-03-10T14:55:44.457835+0000 mon.a (mon.0) 1040 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:55:44.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:44 vm00 bash[20726]: audit 2026-03-10T14:55:44.482945+0000 mon.b (mon.1) 35 : audit [INF] from='client.? 192.168.123.100:0/2652299717' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:44.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:44 vm00 bash[20726]: audit 2026-03-10T14:55:44.482945+0000 mon.b (mon.1) 35 : audit [INF] from='client.? 
192.168.123.100:0/2652299717' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:44.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:44 vm00 bash[20726]: cluster 2026-03-10T14:55:44.484595+0000 mon.a (mon.0) 1041 : cluster [DBG] osdmap e166: 8 total, 8 up, 8 in 2026-03-10T14:55:44.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:44 vm00 bash[20726]: cluster 2026-03-10T14:55:44.484595+0000 mon.a (mon.0) 1041 : cluster [DBG] osdmap e166: 8 total, 8 up, 8 in 2026-03-10T14:55:44.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:44 vm00 bash[20726]: audit 2026-03-10T14:55:44.493196+0000 mon.a (mon.0) 1042 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:44.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:44 vm00 bash[20726]: audit 2026-03-10T14:55:44.493196+0000 mon.a (mon.0) 1042 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:45.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:45 vm03 bash[23394]: cluster 2026-03-10T14:55:44.267978+0000 mgr.y (mgr.24425) 156 : cluster [DBG] pgmap v199: 164 pgs: 164 active+clean; 455 KiB data, 364 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:55:45.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:45 vm03 bash[23394]: cluster 2026-03-10T14:55:44.267978+0000 mgr.y (mgr.24425) 156 : cluster [DBG] pgmap v199: 164 pgs: 164 active+clean; 455 KiB data, 364 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:55:45.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:45 vm03 bash[23394]: audit 2026-03-10T14:55:45.475725+0000 mon.a (mon.0) 1043 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:55:45.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:45 vm03 bash[23394]: audit 2026-03-10T14:55:45.475725+0000 mon.a (mon.0) 1043 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:55:45.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:45 vm03 bash[23394]: cluster 2026-03-10T14:55:45.485166+0000 mon.a (mon.0) 1044 : cluster [DBG] osdmap e167: 8 total, 8 up, 8 in 2026-03-10T14:55:45.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:45 vm03 bash[23394]: cluster 2026-03-10T14:55:45.485166+0000 mon.a (mon.0) 1044 : cluster [DBG] osdmap e167: 8 total, 8 up, 8 in 2026-03-10T14:55:45.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:45 vm00 bash[28403]: cluster 2026-03-10T14:55:44.267978+0000 mgr.y (mgr.24425) 156 : cluster [DBG] pgmap v199: 164 pgs: 164 active+clean; 455 KiB data, 364 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:55:45.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:45 vm00 bash[28403]: cluster 2026-03-10T14:55:44.267978+0000 mgr.y (mgr.24425) 156 : cluster [DBG] pgmap v199: 164 pgs: 164 active+clean; 455 KiB data, 364 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:55:45.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:45 vm00 bash[28403]: audit 2026-03-10T14:55:45.475725+0000 mon.a (mon.0) 1043 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:55:45.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:45 vm00 bash[28403]: audit 2026-03-10T14:55:45.475725+0000 mon.a (mon.0) 1043 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:55:45.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:45 vm00 bash[28403]: cluster 2026-03-10T14:55:45.485166+0000 mon.a (mon.0) 1044 : cluster [DBG] osdmap e167: 8 total, 8 up, 8 in 2026-03-10T14:55:45.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:45 vm00 bash[28403]: cluster 2026-03-10T14:55:45.485166+0000 mon.a (mon.0) 1044 : cluster [DBG] osdmap e167: 8 total, 8 up, 8 in 2026-03-10T14:55:45.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:45 vm00 bash[20726]: cluster 2026-03-10T14:55:44.267978+0000 mgr.y (mgr.24425) 156 : cluster [DBG] pgmap v199: 164 pgs: 164 active+clean; 455 KiB data, 364 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:55:45.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:45 vm00 bash[20726]: cluster 2026-03-10T14:55:44.267978+0000 mgr.y (mgr.24425) 156 : cluster [DBG] pgmap v199: 164 pgs: 164 active+clean; 455 KiB data, 364 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:55:45.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:45 vm00 bash[20726]: audit 2026-03-10T14:55:45.475725+0000 mon.a (mon.0) 1043 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:55:45.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:45 vm00 bash[20726]: audit 2026-03-10T14:55:45.475725+0000 mon.a (mon.0) 1043 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:55:45.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:45 vm00 bash[20726]: cluster 2026-03-10T14:55:45.485166+0000 mon.a (mon.0) 1044 : cluster [DBG] osdmap e167: 8 total, 8 up, 8 in 2026-03-10T14:55:45.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:45 vm00 bash[20726]: cluster 2026-03-10T14:55:45.485166+0000 mon.a (mon.0) 1044 : cluster [DBG] osdmap e167: 8 total, 8 up, 8 in 2026-03-10T14:55:46.664 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_list_snaps_empty PASSED [ 42%] 2026-03-10T14:55:47.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:47 vm00 bash[28403]: cluster 2026-03-10T14:55:46.268303+0000 mgr.y (mgr.24425) 157 : cluster [DBG] pgmap v202: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 364 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:47.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:47 vm00 bash[28403]: cluster 2026-03-10T14:55:46.268303+0000 mgr.y (mgr.24425) 157 : cluster [DBG] pgmap v202: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 364 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:47.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:47 vm00 bash[28403]: cluster 2026-03-10T14:55:46.665554+0000 mon.a (mon.0) 1045 : cluster [DBG] osdmap e168: 8 total, 8 up, 8 in 2026-03-10T14:55:47.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:47 vm00 bash[28403]: cluster 2026-03-10T14:55:46.665554+0000 mon.a (mon.0) 1045 : cluster [DBG] osdmap e168: 8 total, 8 up, 8 in 2026-03-10T14:55:47.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:47 vm00 bash[20726]: cluster 2026-03-10T14:55:46.268303+0000 mgr.y (mgr.24425) 157 : cluster [DBG] pgmap v202: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 364 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-10T14:55:47.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:47 vm00 bash[20726]: cluster 2026-03-10T14:55:46.268303+0000 mgr.y (mgr.24425) 157 : cluster [DBG] pgmap v202: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 364 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:47.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:47 vm00 bash[20726]: cluster 2026-03-10T14:55:46.665554+0000 mon.a (mon.0) 1045 : cluster [DBG] osdmap e168: 8 total, 8 up, 8 in 2026-03-10T14:55:47.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:47 vm00 bash[20726]: cluster 2026-03-10T14:55:46.665554+0000 mon.a (mon.0) 1045 : cluster [DBG] osdmap e168: 8 total, 8 up, 8 in 2026-03-10T14:55:48.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:47 vm03 bash[23394]: cluster 2026-03-10T14:55:46.268303+0000 mgr.y (mgr.24425) 157 : cluster [DBG] pgmap v202: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 364 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:48.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:47 vm03 bash[23394]: cluster 2026-03-10T14:55:46.268303+0000 mgr.y (mgr.24425) 157 : cluster [DBG] pgmap v202: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 364 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:48.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:47 vm03 bash[23394]: cluster 2026-03-10T14:55:46.665554+0000 mon.a (mon.0) 1045 : cluster [DBG] osdmap e168: 8 total, 8 up, 8 in 2026-03-10T14:55:48.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:47 vm03 bash[23394]: cluster 2026-03-10T14:55:46.665554+0000 mon.a (mon.0) 1045 : cluster [DBG] osdmap e168: 8 total, 8 up, 8 in 2026-03-10T14:55:49.125 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 14:55:48 vm03 bash[48459]: debug there is no tcmu-runner data available 2026-03-10T14:55:49.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:48 vm03 bash[23394]: 
cluster 2026-03-10T14:55:47.770670+0000 mon.a (mon.0) 1046 : cluster [DBG] osdmap e169: 8 total, 8 up, 8 in 2026-03-10T14:55:49.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:48 vm03 bash[23394]: cluster 2026-03-10T14:55:47.770670+0000 mon.a (mon.0) 1046 : cluster [DBG] osdmap e169: 8 total, 8 up, 8 in 2026-03-10T14:55:49.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:48 vm00 bash[28403]: cluster 2026-03-10T14:55:47.770670+0000 mon.a (mon.0) 1046 : cluster [DBG] osdmap e169: 8 total, 8 up, 8 in 2026-03-10T14:55:49.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:48 vm00 bash[28403]: cluster 2026-03-10T14:55:47.770670+0000 mon.a (mon.0) 1046 : cluster [DBG] osdmap e169: 8 total, 8 up, 8 in 2026-03-10T14:55:49.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:48 vm00 bash[20726]: cluster 2026-03-10T14:55:47.770670+0000 mon.a (mon.0) 1046 : cluster [DBG] osdmap e169: 8 total, 8 up, 8 in 2026-03-10T14:55:49.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:48 vm00 bash[20726]: cluster 2026-03-10T14:55:47.770670+0000 mon.a (mon.0) 1046 : cluster [DBG] osdmap e169: 8 total, 8 up, 8 in 2026-03-10T14:55:50.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:49 vm03 bash[23394]: cluster 2026-03-10T14:55:48.268632+0000 mgr.y (mgr.24425) 158 : cluster [DBG] pgmap v205: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 364 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:50.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:49 vm03 bash[23394]: cluster 2026-03-10T14:55:48.268632+0000 mgr.y (mgr.24425) 158 : cluster [DBG] pgmap v205: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 364 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:50.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:49 vm03 bash[23394]: audit 2026-03-10T14:55:48.657295+0000 mgr.y (mgr.24425) 159 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": 
"json"}]: dispatch 2026-03-10T14:55:50.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:49 vm03 bash[23394]: audit 2026-03-10T14:55:48.657295+0000 mgr.y (mgr.24425) 159 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:55:50.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:49 vm03 bash[23394]: cluster 2026-03-10T14:55:48.781481+0000 mon.a (mon.0) 1047 : cluster [DBG] osdmap e170: 8 total, 8 up, 8 in 2026-03-10T14:55:50.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:49 vm03 bash[23394]: cluster 2026-03-10T14:55:48.781481+0000 mon.a (mon.0) 1047 : cluster [DBG] osdmap e170: 8 total, 8 up, 8 in 2026-03-10T14:55:50.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:49 vm03 bash[23394]: cluster 2026-03-10T14:55:49.787598+0000 mon.a (mon.0) 1048 : cluster [DBG] osdmap e171: 8 total, 8 up, 8 in 2026-03-10T14:55:50.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:49 vm03 bash[23394]: cluster 2026-03-10T14:55:49.787598+0000 mon.a (mon.0) 1048 : cluster [DBG] osdmap e171: 8 total, 8 up, 8 in 2026-03-10T14:55:50.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:49 vm00 bash[28403]: cluster 2026-03-10T14:55:48.268632+0000 mgr.y (mgr.24425) 158 : cluster [DBG] pgmap v205: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 364 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:50.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:49 vm00 bash[28403]: cluster 2026-03-10T14:55:48.268632+0000 mgr.y (mgr.24425) 158 : cluster [DBG] pgmap v205: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 364 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:50.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:49 vm00 bash[28403]: audit 2026-03-10T14:55:48.657295+0000 mgr.y (mgr.24425) 159 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": 
"json"}]: dispatch 2026-03-10T14:55:50.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:49 vm00 bash[28403]: audit 2026-03-10T14:55:48.657295+0000 mgr.y (mgr.24425) 159 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:55:50.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:49 vm00 bash[28403]: cluster 2026-03-10T14:55:48.781481+0000 mon.a (mon.0) 1047 : cluster [DBG] osdmap e170: 8 total, 8 up, 8 in 2026-03-10T14:55:50.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:49 vm00 bash[28403]: cluster 2026-03-10T14:55:48.781481+0000 mon.a (mon.0) 1047 : cluster [DBG] osdmap e170: 8 total, 8 up, 8 in 2026-03-10T14:55:50.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:49 vm00 bash[28403]: cluster 2026-03-10T14:55:49.787598+0000 mon.a (mon.0) 1048 : cluster [DBG] osdmap e171: 8 total, 8 up, 8 in 2026-03-10T14:55:50.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:49 vm00 bash[28403]: cluster 2026-03-10T14:55:49.787598+0000 mon.a (mon.0) 1048 : cluster [DBG] osdmap e171: 8 total, 8 up, 8 in 2026-03-10T14:55:50.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:49 vm00 bash[20726]: cluster 2026-03-10T14:55:48.268632+0000 mgr.y (mgr.24425) 158 : cluster [DBG] pgmap v205: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 364 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:50.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:49 vm00 bash[20726]: cluster 2026-03-10T14:55:48.268632+0000 mgr.y (mgr.24425) 158 : cluster [DBG] pgmap v205: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 364 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:50.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:49 vm00 bash[20726]: audit 2026-03-10T14:55:48.657295+0000 mgr.y (mgr.24425) 159 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": 
"json"}]: dispatch 2026-03-10T14:55:50.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:49 vm00 bash[20726]: audit 2026-03-10T14:55:48.657295+0000 mgr.y (mgr.24425) 159 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:55:50.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:49 vm00 bash[20726]: cluster 2026-03-10T14:55:48.781481+0000 mon.a (mon.0) 1047 : cluster [DBG] osdmap e170: 8 total, 8 up, 8 in 2026-03-10T14:55:50.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:49 vm00 bash[20726]: cluster 2026-03-10T14:55:48.781481+0000 mon.a (mon.0) 1047 : cluster [DBG] osdmap e170: 8 total, 8 up, 8 in 2026-03-10T14:55:50.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:49 vm00 bash[20726]: cluster 2026-03-10T14:55:49.787598+0000 mon.a (mon.0) 1048 : cluster [DBG] osdmap e171: 8 total, 8 up, 8 in 2026-03-10T14:55:50.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:49 vm00 bash[20726]: cluster 2026-03-10T14:55:49.787598+0000 mon.a (mon.0) 1048 : cluster [DBG] osdmap e171: 8 total, 8 up, 8 in 2026-03-10T14:55:51.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:50 vm00 bash[28403]: cluster 2026-03-10T14:55:50.269078+0000 mgr.y (mgr.24425) 160 : cluster [DBG] pgmap v208: 196 pgs: 196 active+clean; 455 KiB data, 365 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:51.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:50 vm00 bash[28403]: cluster 2026-03-10T14:55:50.269078+0000 mgr.y (mgr.24425) 160 : cluster [DBG] pgmap v208: 196 pgs: 196 active+clean; 455 KiB data, 365 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:51.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:50 vm00 bash[20726]: cluster 2026-03-10T14:55:50.269078+0000 mgr.y (mgr.24425) 160 : cluster [DBG] pgmap v208: 196 pgs: 196 active+clean; 455 KiB data, 365 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-10T14:55:51.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:50 vm00 bash[20726]: cluster 2026-03-10T14:55:50.269078+0000 mgr.y (mgr.24425) 160 : cluster [DBG] pgmap v208: 196 pgs: 196 active+clean; 455 KiB data, 365 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:51.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:50 vm03 bash[23394]: cluster 2026-03-10T14:55:50.269078+0000 mgr.y (mgr.24425) 160 : cluster [DBG] pgmap v208: 196 pgs: 196 active+clean; 455 KiB data, 365 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:51.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:50 vm03 bash[23394]: cluster 2026-03-10T14:55:50.269078+0000 mgr.y (mgr.24425) 160 : cluster [DBG] pgmap v208: 196 pgs: 196 active+clean; 455 KiB data, 365 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:52.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:51 vm00 bash[28403]: cluster 2026-03-10T14:55:50.930412+0000 mon.a (mon.0) 1049 : cluster [DBG] osdmap e172: 8 total, 8 up, 8 in 2026-03-10T14:55:52.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:51 vm00 bash[28403]: cluster 2026-03-10T14:55:50.930412+0000 mon.a (mon.0) 1049 : cluster [DBG] osdmap e172: 8 total, 8 up, 8 in 2026-03-10T14:55:52.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:51 vm00 bash[28403]: audit 2026-03-10T14:55:50.943907+0000 mon.c (mon.2) 37 : audit [INF] from='client.? 192.168.123.100:0/2739473472' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:52.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:51 vm00 bash[28403]: audit 2026-03-10T14:55:50.943907+0000 mon.c (mon.2) 37 : audit [INF] from='client.? 
192.168.123.100:0/2739473472' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:52.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:51 vm00 bash[28403]: audit 2026-03-10T14:55:50.952503+0000 mon.a (mon.0) 1050 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:52.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:51 vm00 bash[28403]: audit 2026-03-10T14:55:50.952503+0000 mon.a (mon.0) 1050 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:52.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:51 vm00 bash[20726]: cluster 2026-03-10T14:55:50.930412+0000 mon.a (mon.0) 1049 : cluster [DBG] osdmap e172: 8 total, 8 up, 8 in 2026-03-10T14:55:52.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:51 vm00 bash[20726]: cluster 2026-03-10T14:55:50.930412+0000 mon.a (mon.0) 1049 : cluster [DBG] osdmap e172: 8 total, 8 up, 8 in 2026-03-10T14:55:52.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:51 vm00 bash[20726]: audit 2026-03-10T14:55:50.943907+0000 mon.c (mon.2) 37 : audit [INF] from='client.? 192.168.123.100:0/2739473472' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:52.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:51 vm00 bash[20726]: audit 2026-03-10T14:55:50.943907+0000 mon.c (mon.2) 37 : audit [INF] from='client.? 192.168.123.100:0/2739473472' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:52.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:51 vm00 bash[20726]: audit 2026-03-10T14:55:50.952503+0000 mon.a (mon.0) 1050 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:52.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:51 vm00 bash[20726]: audit 2026-03-10T14:55:50.952503+0000 mon.a (mon.0) 1050 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:52.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:51 vm03 bash[23394]: cluster 2026-03-10T14:55:50.930412+0000 mon.a (mon.0) 1049 : cluster [DBG] osdmap e172: 8 total, 8 up, 8 in 2026-03-10T14:55:52.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:51 vm03 bash[23394]: cluster 2026-03-10T14:55:50.930412+0000 mon.a (mon.0) 1049 : cluster [DBG] osdmap e172: 8 total, 8 up, 8 in 2026-03-10T14:55:52.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:51 vm03 bash[23394]: audit 2026-03-10T14:55:50.943907+0000 mon.c (mon.2) 37 : audit [INF] from='client.? 192.168.123.100:0/2739473472' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:52.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:51 vm03 bash[23394]: audit 2026-03-10T14:55:50.943907+0000 mon.c (mon.2) 37 : audit [INF] from='client.? 192.168.123.100:0/2739473472' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:52.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:51 vm03 bash[23394]: audit 2026-03-10T14:55:50.952503+0000 mon.a (mon.0) 1050 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:52.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:51 vm03 bash[23394]: audit 2026-03-10T14:55:50.952503+0000 mon.a (mon.0) 1050 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:52.956 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_list_snaps PASSED [ 43%] 2026-03-10T14:55:53.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:52 vm03 bash[23394]: audit 2026-03-10T14:55:51.949217+0000 mon.a (mon.0) 1051 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:55:53.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:52 vm03 bash[23394]: audit 2026-03-10T14:55:51.949217+0000 mon.a (mon.0) 1051 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:55:53.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:52 vm03 bash[23394]: cluster 2026-03-10T14:55:51.953806+0000 mon.a (mon.0) 1052 : cluster [DBG] osdmap e173: 8 total, 8 up, 8 in 2026-03-10T14:55:53.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:52 vm03 bash[23394]: cluster 2026-03-10T14:55:51.953806+0000 mon.a (mon.0) 1052 : cluster [DBG] osdmap e173: 8 total, 8 up, 8 in 2026-03-10T14:55:53.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:52 vm03 bash[23394]: cluster 2026-03-10T14:55:52.269384+0000 mgr.y (mgr.24425) 161 : cluster [DBG] pgmap v211: 196 pgs: 196 active+clean; 455 KiB data, 365 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:53.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:52 vm03 bash[23394]: cluster 2026-03-10T14:55:52.269384+0000 mgr.y (mgr.24425) 161 : cluster [DBG] pgmap v211: 196 pgs: 196 active+clean; 455 KiB data, 365 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:53.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:52 vm03 bash[23394]: cluster 2026-03-10T14:55:52.957078+0000 mon.a (mon.0) 1053 : cluster [DBG] osdmap e174: 8 total, 8 up, 8 in 2026-03-10T14:55:53.375 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:52 vm03 bash[23394]: cluster 2026-03-10T14:55:52.957078+0000 mon.a (mon.0) 1053 : cluster [DBG] osdmap e174: 8 total, 8 up, 8 in 2026-03-10T14:55:53.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:52 vm00 bash[28403]: audit 2026-03-10T14:55:51.949217+0000 mon.a (mon.0) 1051 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:55:53.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:52 vm00 bash[28403]: audit 2026-03-10T14:55:51.949217+0000 mon.a (mon.0) 1051 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:55:53.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:52 vm00 bash[28403]: cluster 2026-03-10T14:55:51.953806+0000 mon.a (mon.0) 1052 : cluster [DBG] osdmap e173: 8 total, 8 up, 8 in 2026-03-10T14:55:53.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:52 vm00 bash[28403]: cluster 2026-03-10T14:55:51.953806+0000 mon.a (mon.0) 1052 : cluster [DBG] osdmap e173: 8 total, 8 up, 8 in 2026-03-10T14:55:53.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:52 vm00 bash[28403]: cluster 2026-03-10T14:55:52.269384+0000 mgr.y (mgr.24425) 161 : cluster [DBG] pgmap v211: 196 pgs: 196 active+clean; 455 KiB data, 365 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:53.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:52 vm00 bash[28403]: cluster 2026-03-10T14:55:52.269384+0000 mgr.y (mgr.24425) 161 : cluster [DBG] pgmap v211: 196 pgs: 196 active+clean; 455 KiB data, 365 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:53.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:52 vm00 bash[28403]: cluster 2026-03-10T14:55:52.957078+0000 mon.a (mon.0) 1053 : cluster [DBG] osdmap e174: 8 total, 8 up, 8 in 2026-03-10T14:55:53.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:52 vm00 bash[28403]: 
cluster 2026-03-10T14:55:52.957078+0000 mon.a (mon.0) 1053 : cluster [DBG] osdmap e174: 8 total, 8 up, 8 in 2026-03-10T14:55:53.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:52 vm00 bash[20726]: audit 2026-03-10T14:55:51.949217+0000 mon.a (mon.0) 1051 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:55:53.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:52 vm00 bash[20726]: audit 2026-03-10T14:55:51.949217+0000 mon.a (mon.0) 1051 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:55:53.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:52 vm00 bash[20726]: cluster 2026-03-10T14:55:51.953806+0000 mon.a (mon.0) 1052 : cluster [DBG] osdmap e173: 8 total, 8 up, 8 in 2026-03-10T14:55:53.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:52 vm00 bash[20726]: cluster 2026-03-10T14:55:51.953806+0000 mon.a (mon.0) 1052 : cluster [DBG] osdmap e173: 8 total, 8 up, 8 in 2026-03-10T14:55:53.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:52 vm00 bash[20726]: cluster 2026-03-10T14:55:52.269384+0000 mgr.y (mgr.24425) 161 : cluster [DBG] pgmap v211: 196 pgs: 196 active+clean; 455 KiB data, 365 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:53.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:52 vm00 bash[20726]: cluster 2026-03-10T14:55:52.269384+0000 mgr.y (mgr.24425) 161 : cluster [DBG] pgmap v211: 196 pgs: 196 active+clean; 455 KiB data, 365 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:53.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:52 vm00 bash[20726]: cluster 2026-03-10T14:55:52.957078+0000 mon.a (mon.0) 1053 : cluster [DBG] osdmap e174: 8 total, 8 up, 8 in 2026-03-10T14:55:53.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:52 vm00 bash[20726]: cluster 2026-03-10T14:55:52.957078+0000 mon.a (mon.0) 1053 : cluster [DBG] 
osdmap e174: 8 total, 8 up, 8 in 2026-03-10T14:55:54.038 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:55:53 vm00 bash[21005]: ::ffff:192.168.123.103 - - [10/Mar/2026:14:55:53] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:55:54.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:54 vm03 bash[23394]: cluster 2026-03-10T14:55:53.443871+0000 mon.a (mon.0) 1054 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:55:54.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:54 vm03 bash[23394]: cluster 2026-03-10T14:55:53.443871+0000 mon.a (mon.0) 1054 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:55:54.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:54 vm00 bash[28403]: cluster 2026-03-10T14:55:53.443871+0000 mon.a (mon.0) 1054 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:55:54.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:54 vm00 bash[28403]: cluster 2026-03-10T14:55:53.443871+0000 mon.a (mon.0) 1054 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:55:54.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:54 vm00 bash[20726]: cluster 2026-03-10T14:55:53.443871+0000 mon.a (mon.0) 1054 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:55:54.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:54 vm00 bash[20726]: cluster 2026-03-10T14:55:53.443871+0000 mon.a (mon.0) 1054 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:55:55.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:55 vm03 bash[23394]: cluster 2026-03-10T14:55:54.022291+0000 mon.a (mon.0) 1055 : cluster [DBG] osdmap e175: 8 total, 8 
up, 8 in 2026-03-10T14:55:55.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:55 vm03 bash[23394]: cluster 2026-03-10T14:55:54.022291+0000 mon.a (mon.0) 1055 : cluster [DBG] osdmap e175: 8 total, 8 up, 8 in 2026-03-10T14:55:55.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:55 vm03 bash[23394]: cluster 2026-03-10T14:55:54.269806+0000 mgr.y (mgr.24425) 162 : cluster [DBG] pgmap v214: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 365 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:55:55.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:55 vm03 bash[23394]: cluster 2026-03-10T14:55:54.269806+0000 mgr.y (mgr.24425) 162 : cluster [DBG] pgmap v214: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 365 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:55:55.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:55 vm03 bash[23394]: audit 2026-03-10T14:55:54.438092+0000 mon.a (mon.0) 1056 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:55:55.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:55 vm03 bash[23394]: audit 2026-03-10T14:55:54.438092+0000 mon.a (mon.0) 1056 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:55:55.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:55 vm03 bash[23394]: cluster 2026-03-10T14:55:55.045395+0000 mon.a (mon.0) 1057 : cluster [DBG] osdmap e176: 8 total, 8 up, 8 in 2026-03-10T14:55:55.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:55 vm03 bash[23394]: cluster 2026-03-10T14:55:55.045395+0000 mon.a (mon.0) 1057 : cluster [DBG] osdmap e176: 8 total, 8 up, 8 in 2026-03-10T14:55:55.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:55 vm03 bash[23394]: audit 2026-03-10T14:55:55.045768+0000 mon.c (mon.2) 38 : audit [INF] from='client.? 
192.168.123.100:0/177711750' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:55.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:55 vm03 bash[23394]: audit 2026-03-10T14:55:55.045768+0000 mon.c (mon.2) 38 : audit [INF] from='client.? 192.168.123.100:0/177711750' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:55.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:55 vm00 bash[28403]: cluster 2026-03-10T14:55:54.022291+0000 mon.a (mon.0) 1055 : cluster [DBG] osdmap e175: 8 total, 8 up, 8 in 2026-03-10T14:55:55.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:55 vm00 bash[28403]: cluster 2026-03-10T14:55:54.022291+0000 mon.a (mon.0) 1055 : cluster [DBG] osdmap e175: 8 total, 8 up, 8 in 2026-03-10T14:55:55.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:55 vm00 bash[28403]: cluster 2026-03-10T14:55:54.269806+0000 mgr.y (mgr.24425) 162 : cluster [DBG] pgmap v214: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 365 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:55:55.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:55 vm00 bash[28403]: cluster 2026-03-10T14:55:54.269806+0000 mgr.y (mgr.24425) 162 : cluster [DBG] pgmap v214: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 365 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:55:55.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:55 vm00 bash[28403]: audit 2026-03-10T14:55:54.438092+0000 mon.a (mon.0) 1056 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:55:55.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:55 vm00 bash[28403]: audit 2026-03-10T14:55:54.438092+0000 mon.a (mon.0) 1056 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:55:55.465 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:55 vm00 bash[28403]: cluster 2026-03-10T14:55:55.045395+0000 mon.a (mon.0) 1057 : cluster [DBG] osdmap e176: 8 total, 8 up, 8 in 2026-03-10T14:55:55.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:55 vm00 bash[28403]: cluster 2026-03-10T14:55:55.045395+0000 mon.a (mon.0) 1057 : cluster [DBG] osdmap e176: 8 total, 8 up, 8 in 2026-03-10T14:55:55.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:55 vm00 bash[28403]: audit 2026-03-10T14:55:55.045768+0000 mon.c (mon.2) 38 : audit [INF] from='client.? 192.168.123.100:0/177711750' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:55.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:55 vm00 bash[28403]: audit 2026-03-10T14:55:55.045768+0000 mon.c (mon.2) 38 : audit [INF] from='client.? 192.168.123.100:0/177711750' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:55.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:55 vm00 bash[20726]: cluster 2026-03-10T14:55:54.022291+0000 mon.a (mon.0) 1055 : cluster [DBG] osdmap e175: 8 total, 8 up, 8 in 2026-03-10T14:55:55.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:55 vm00 bash[20726]: cluster 2026-03-10T14:55:54.022291+0000 mon.a (mon.0) 1055 : cluster [DBG] osdmap e175: 8 total, 8 up, 8 in 2026-03-10T14:55:55.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:55 vm00 bash[20726]: cluster 2026-03-10T14:55:54.269806+0000 mgr.y (mgr.24425) 162 : cluster [DBG] pgmap v214: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 365 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:55:55.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:55 vm00 bash[20726]: cluster 2026-03-10T14:55:54.269806+0000 mgr.y (mgr.24425) 162 : cluster [DBG] pgmap v214: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 365 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:55:55.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 
14:55:55 vm00 bash[20726]: audit 2026-03-10T14:55:54.438092+0000 mon.a (mon.0) 1056 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:55:55.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:55 vm00 bash[20726]: audit 2026-03-10T14:55:54.438092+0000 mon.a (mon.0) 1056 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:55:55.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:55 vm00 bash[20726]: cluster 2026-03-10T14:55:55.045395+0000 mon.a (mon.0) 1057 : cluster [DBG] osdmap e176: 8 total, 8 up, 8 in 2026-03-10T14:55:55.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:55 vm00 bash[20726]: cluster 2026-03-10T14:55:55.045395+0000 mon.a (mon.0) 1057 : cluster [DBG] osdmap e176: 8 total, 8 up, 8 in 2026-03-10T14:55:55.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:55 vm00 bash[20726]: audit 2026-03-10T14:55:55.045768+0000 mon.c (mon.2) 38 : audit [INF] from='client.? 192.168.123.100:0/177711750' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:55.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:55 vm00 bash[20726]: audit 2026-03-10T14:55:55.045768+0000 mon.c (mon.2) 38 : audit [INF] from='client.? 192.168.123.100:0/177711750' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:56.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:56 vm03 bash[23394]: audit 2026-03-10T14:55:55.059186+0000 mon.a (mon.0) 1058 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:56.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:56 vm03 bash[23394]: audit 2026-03-10T14:55:55.059186+0000 mon.a (mon.0) 1058 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:56.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:56 vm03 bash[23394]: audit 2026-03-10T14:55:56.024852+0000 mon.a (mon.0) 1059 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:55:56.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:56 vm03 bash[23394]: audit 2026-03-10T14:55:56.024852+0000 mon.a (mon.0) 1059 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:55:56.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:56 vm03 bash[23394]: cluster 2026-03-10T14:55:56.029434+0000 mon.a (mon.0) 1060 : cluster [DBG] osdmap e177: 8 total, 8 up, 8 in 2026-03-10T14:55:56.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:56 vm03 bash[23394]: cluster 2026-03-10T14:55:56.029434+0000 mon.a (mon.0) 1060 : cluster [DBG] osdmap e177: 8 total, 8 up, 8 in 2026-03-10T14:55:56.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:56 vm00 bash[28403]: audit 2026-03-10T14:55:55.059186+0000 mon.a (mon.0) 1058 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:56.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:56 vm00 bash[28403]: audit 2026-03-10T14:55:55.059186+0000 mon.a (mon.0) 1058 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:56.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:56 vm00 bash[28403]: audit 2026-03-10T14:55:56.024852+0000 mon.a (mon.0) 1059 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:55:56.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:56 vm00 bash[28403]: audit 2026-03-10T14:55:56.024852+0000 mon.a (mon.0) 1059 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:55:56.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:56 vm00 bash[28403]: cluster 2026-03-10T14:55:56.029434+0000 mon.a (mon.0) 1060 : cluster [DBG] osdmap e177: 8 total, 8 up, 8 in 2026-03-10T14:55:56.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:56 vm00 bash[28403]: cluster 2026-03-10T14:55:56.029434+0000 mon.a (mon.0) 1060 : cluster [DBG] osdmap e177: 8 total, 8 up, 8 in 2026-03-10T14:55:56.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:56 vm00 bash[20726]: audit 2026-03-10T14:55:55.059186+0000 mon.a (mon.0) 1058 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:56.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:56 vm00 bash[20726]: audit 2026-03-10T14:55:55.059186+0000 mon.a (mon.0) 1058 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:55:56.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:56 vm00 bash[20726]: audit 2026-03-10T14:55:56.024852+0000 mon.a (mon.0) 1059 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:55:56.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:56 vm00 bash[20726]: audit 2026-03-10T14:55:56.024852+0000 mon.a (mon.0) 1059 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:55:56.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:56 vm00 bash[20726]: cluster 2026-03-10T14:55:56.029434+0000 mon.a (mon.0) 1060 : cluster [DBG] osdmap e177: 8 total, 8 up, 8 in 2026-03-10T14:55:56.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:56 vm00 bash[20726]: cluster 2026-03-10T14:55:56.029434+0000 mon.a (mon.0) 1060 : cluster [DBG] osdmap e177: 8 total, 8 up, 8 in 2026-03-10T14:55:57.043 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_lookup_snap PASSED [ 45%] 2026-03-10T14:55:57.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:57 vm03 bash[23394]: cluster 2026-03-10T14:55:56.270141+0000 mgr.y (mgr.24425) 163 : cluster [DBG] pgmap v217: 196 pgs: 196 active+clean; 455 KiB data, 365 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:57.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:57 vm03 bash[23394]: cluster 2026-03-10T14:55:56.270141+0000 mgr.y (mgr.24425) 163 : cluster [DBG] pgmap v217: 196 pgs: 196 active+clean; 455 KiB data, 365 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:57.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:57 vm03 bash[23394]: cluster 2026-03-10T14:55:57.038461+0000 mon.a (mon.0) 1061 : cluster [DBG] osdmap e178: 8 total, 8 up, 8 in 2026-03-10T14:55:57.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:57 vm03 bash[23394]: cluster 2026-03-10T14:55:57.038461+0000 mon.a (mon.0) 1061 : cluster [DBG] osdmap e178: 8 total, 8 up, 8 in 2026-03-10T14:55:57.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:57 vm00 bash[28403]: cluster 2026-03-10T14:55:56.270141+0000 mgr.y (mgr.24425) 163 : cluster [DBG] pgmap v217: 196 pgs: 196 active+clean; 455 KiB data, 365 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:57.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 
14:55:57 vm00 bash[28403]: cluster 2026-03-10T14:55:56.270141+0000 mgr.y (mgr.24425) 163 : cluster [DBG] pgmap v217: 196 pgs: 196 active+clean; 455 KiB data, 365 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:57.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:57 vm00 bash[28403]: cluster 2026-03-10T14:55:57.038461+0000 mon.a (mon.0) 1061 : cluster [DBG] osdmap e178: 8 total, 8 up, 8 in 2026-03-10T14:55:57.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:57 vm00 bash[28403]: cluster 2026-03-10T14:55:57.038461+0000 mon.a (mon.0) 1061 : cluster [DBG] osdmap e178: 8 total, 8 up, 8 in 2026-03-10T14:55:57.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:57 vm00 bash[20726]: cluster 2026-03-10T14:55:56.270141+0000 mgr.y (mgr.24425) 163 : cluster [DBG] pgmap v217: 196 pgs: 196 active+clean; 455 KiB data, 365 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:57.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:57 vm00 bash[20726]: cluster 2026-03-10T14:55:56.270141+0000 mgr.y (mgr.24425) 163 : cluster [DBG] pgmap v217: 196 pgs: 196 active+clean; 455 KiB data, 365 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:57.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:57 vm00 bash[20726]: cluster 2026-03-10T14:55:57.038461+0000 mon.a (mon.0) 1061 : cluster [DBG] osdmap e178: 8 total, 8 up, 8 in 2026-03-10T14:55:57.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:57 vm00 bash[20726]: cluster 2026-03-10T14:55:57.038461+0000 mon.a (mon.0) 1061 : cluster [DBG] osdmap e178: 8 total, 8 up, 8 in 2026-03-10T14:55:59.095 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 14:55:58 vm03 bash[48459]: debug there is no tcmu-runner data available 2026-03-10T14:55:59.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:59 vm03 bash[23394]: cluster 2026-03-10T14:55:58.078827+0000 mon.a (mon.0) 1062 : cluster [DBG] osdmap e179: 8 total, 8 up, 8 in 2026-03-10T14:55:59.375 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:59 vm03 bash[23394]: cluster 2026-03-10T14:55:58.078827+0000 mon.a (mon.0) 1062 : cluster [DBG] osdmap e179: 8 total, 8 up, 8 in 2026-03-10T14:55:59.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:59 vm03 bash[23394]: cluster 2026-03-10T14:55:58.270480+0000 mgr.y (mgr.24425) 164 : cluster [DBG] pgmap v220: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 365 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:59.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:59 vm03 bash[23394]: cluster 2026-03-10T14:55:58.270480+0000 mgr.y (mgr.24425) 164 : cluster [DBG] pgmap v220: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 365 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:59.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:59 vm03 bash[23394]: audit 2026-03-10T14:55:58.668130+0000 mgr.y (mgr.24425) 165 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:55:59.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:55:59 vm03 bash[23394]: audit 2026-03-10T14:55:58.668130+0000 mgr.y (mgr.24425) 165 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:55:59.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:59 vm00 bash[28403]: cluster 2026-03-10T14:55:58.078827+0000 mon.a (mon.0) 1062 : cluster [DBG] osdmap e179: 8 total, 8 up, 8 in 2026-03-10T14:55:59.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:59 vm00 bash[28403]: cluster 2026-03-10T14:55:58.078827+0000 mon.a (mon.0) 1062 : cluster [DBG] osdmap e179: 8 total, 8 up, 8 in 2026-03-10T14:55:59.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:59 vm00 bash[28403]: cluster 2026-03-10T14:55:58.270480+0000 mgr.y (mgr.24425) 164 : cluster [DBG] pgmap v220: 196 pgs: 32 unknown, 164 active+clean; 455 KiB 
data, 365 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:59.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:59 vm00 bash[28403]: cluster 2026-03-10T14:55:58.270480+0000 mgr.y (mgr.24425) 164 : cluster [DBG] pgmap v220: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 365 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:59.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:59 vm00 bash[28403]: audit 2026-03-10T14:55:58.668130+0000 mgr.y (mgr.24425) 165 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:55:59.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:55:59 vm00 bash[28403]: audit 2026-03-10T14:55:58.668130+0000 mgr.y (mgr.24425) 165 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:55:59.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:59 vm00 bash[20726]: cluster 2026-03-10T14:55:58.078827+0000 mon.a (mon.0) 1062 : cluster [DBG] osdmap e179: 8 total, 8 up, 8 in 2026-03-10T14:55:59.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:59 vm00 bash[20726]: cluster 2026-03-10T14:55:58.078827+0000 mon.a (mon.0) 1062 : cluster [DBG] osdmap e179: 8 total, 8 up, 8 in 2026-03-10T14:55:59.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:59 vm00 bash[20726]: cluster 2026-03-10T14:55:58.270480+0000 mgr.y (mgr.24425) 164 : cluster [DBG] pgmap v220: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 365 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:59.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:59 vm00 bash[20726]: cluster 2026-03-10T14:55:58.270480+0000 mgr.y (mgr.24425) 164 : cluster [DBG] pgmap v220: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 365 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:55:59.465 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:59 vm00 bash[20726]: audit 2026-03-10T14:55:58.668130+0000 mgr.y (mgr.24425) 165 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:55:59.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:55:59 vm00 bash[20726]: audit 2026-03-10T14:55:58.668130+0000 mgr.y (mgr.24425) 165 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:56:00.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:00 vm03 bash[23394]: cluster 2026-03-10T14:55:59.076106+0000 mon.a (mon.0) 1063 : cluster [DBG] osdmap e180: 8 total, 8 up, 8 in 2026-03-10T14:56:00.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:00 vm03 bash[23394]: cluster 2026-03-10T14:55:59.076106+0000 mon.a (mon.0) 1063 : cluster [DBG] osdmap e180: 8 total, 8 up, 8 in 2026-03-10T14:56:00.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:00 vm03 bash[23394]: audit 2026-03-10T14:55:59.082152+0000 mon.c (mon.2) 39 : audit [INF] from='client.? 192.168.123.100:0/1850627249' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:00.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:00 vm03 bash[23394]: audit 2026-03-10T14:55:59.082152+0000 mon.c (mon.2) 39 : audit [INF] from='client.? 192.168.123.100:0/1850627249' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:00.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:00 vm03 bash[23394]: audit 2026-03-10T14:55:59.096883+0000 mon.a (mon.0) 1064 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:00.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:00 vm03 bash[23394]: audit 2026-03-10T14:55:59.096883+0000 mon.a (mon.0) 1064 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:00.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:00 vm03 bash[23394]: audit 2026-03-10T14:56:00.073770+0000 mon.a (mon.0) 1065 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:56:00.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:00 vm03 bash[23394]: audit 2026-03-10T14:56:00.073770+0000 mon.a (mon.0) 1065 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:56:00.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:00 vm03 bash[23394]: cluster 2026-03-10T14:56:00.076566+0000 mon.a (mon.0) 1066 : cluster [DBG] osdmap e181: 8 total, 8 up, 8 in 2026-03-10T14:56:00.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:00 vm03 bash[23394]: cluster 2026-03-10T14:56:00.076566+0000 mon.a (mon.0) 1066 : cluster [DBG] osdmap e181: 8 total, 8 up, 8 in 2026-03-10T14:56:00.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:00 vm00 bash[28403]: cluster 2026-03-10T14:55:59.076106+0000 mon.a (mon.0) 1063 : cluster [DBG] osdmap e180: 8 total, 8 up, 8 in 2026-03-10T14:56:00.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:00 vm00 bash[28403]: cluster 2026-03-10T14:55:59.076106+0000 mon.a (mon.0) 1063 : cluster [DBG] osdmap e180: 8 total, 8 up, 8 in 2026-03-10T14:56:00.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:00 vm00 bash[28403]: audit 2026-03-10T14:55:59.082152+0000 mon.c (mon.2) 39 : audit [INF] from='client.? 192.168.123.100:0/1850627249' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:00.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:00 vm00 bash[28403]: audit 2026-03-10T14:55:59.082152+0000 mon.c (mon.2) 39 : audit [INF] from='client.? 
192.168.123.100:0/1850627249' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:00.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:00 vm00 bash[28403]: audit 2026-03-10T14:55:59.096883+0000 mon.a (mon.0) 1064 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:00.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:00 vm00 bash[28403]: audit 2026-03-10T14:55:59.096883+0000 mon.a (mon.0) 1064 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:00.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:00 vm00 bash[28403]: audit 2026-03-10T14:56:00.073770+0000 mon.a (mon.0) 1065 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:56:00.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:00 vm00 bash[28403]: audit 2026-03-10T14:56:00.073770+0000 mon.a (mon.0) 1065 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:56:00.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:00 vm00 bash[28403]: cluster 2026-03-10T14:56:00.076566+0000 mon.a (mon.0) 1066 : cluster [DBG] osdmap e181: 8 total, 8 up, 8 in 2026-03-10T14:56:00.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:00 vm00 bash[28403]: cluster 2026-03-10T14:56:00.076566+0000 mon.a (mon.0) 1066 : cluster [DBG] osdmap e181: 8 total, 8 up, 8 in 2026-03-10T14:56:00.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:00 vm00 bash[20726]: cluster 2026-03-10T14:55:59.076106+0000 mon.a (mon.0) 1063 : cluster [DBG] osdmap e180: 8 total, 8 up, 8 in 2026-03-10T14:56:00.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:00 vm00 bash[20726]: cluster 2026-03-10T14:55:59.076106+0000 mon.a (mon.0) 1063 : cluster [DBG] osdmap e180: 8 total, 8 up, 8 in 2026-03-10T14:56:00.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:00 vm00 bash[20726]: audit 2026-03-10T14:55:59.082152+0000 mon.c (mon.2) 39 : audit [INF] from='client.? 192.168.123.100:0/1850627249' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:00.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:00 vm00 bash[20726]: audit 2026-03-10T14:55:59.082152+0000 mon.c (mon.2) 39 : audit [INF] from='client.? 192.168.123.100:0/1850627249' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:00.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:00 vm00 bash[20726]: audit 2026-03-10T14:55:59.096883+0000 mon.a (mon.0) 1064 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:00.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:00 vm00 bash[20726]: audit 2026-03-10T14:55:59.096883+0000 mon.a (mon.0) 1064 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:00.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:00 vm00 bash[20726]: audit 2026-03-10T14:56:00.073770+0000 mon.a (mon.0) 1065 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:56:00.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:00 vm00 bash[20726]: audit 2026-03-10T14:56:00.073770+0000 mon.a (mon.0) 1065 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:56:00.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:00 vm00 bash[20726]: cluster 2026-03-10T14:56:00.076566+0000 mon.a (mon.0) 1066 : cluster [DBG] osdmap e181: 8 total, 8 up, 8 in 2026-03-10T14:56:00.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:00 vm00 bash[20726]: cluster 2026-03-10T14:56:00.076566+0000 mon.a (mon.0) 1066 : cluster [DBG] osdmap e181: 8 total, 8 up, 8 in 2026-03-10T14:56:01.271 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_snap_timestamp PASSED [ 46%] 2026-03-10T14:56:01.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:01 vm03 bash[23394]: cluster 2026-03-10T14:56:00.271040+0000 mgr.y (mgr.24425) 166 : cluster [DBG] pgmap v223: 196 pgs: 196 active+clean; 455 KiB data, 366 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:56:01.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:01 vm03 bash[23394]: cluster 2026-03-10T14:56:00.271040+0000 mgr.y (mgr.24425) 166 : cluster [DBG] pgmap v223: 196 pgs: 196 active+clean; 455 KiB data, 366 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:56:01.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:01 vm00 bash[28403]: cluster 2026-03-10T14:56:00.271040+0000 mgr.y (mgr.24425) 166 : cluster [DBG] pgmap v223: 196 pgs: 196 active+clean; 455 KiB data, 366 MiB used, 
160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:56:01.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:01 vm00 bash[28403]: cluster 2026-03-10T14:56:00.271040+0000 mgr.y (mgr.24425) 166 : cluster [DBG] pgmap v223: 196 pgs: 196 active+clean; 455 KiB data, 366 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:56:01.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:01 vm00 bash[20726]: cluster 2026-03-10T14:56:00.271040+0000 mgr.y (mgr.24425) 166 : cluster [DBG] pgmap v223: 196 pgs: 196 active+clean; 455 KiB data, 366 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:56:01.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:01 vm00 bash[20726]: cluster 2026-03-10T14:56:00.271040+0000 mgr.y (mgr.24425) 166 : cluster [DBG] pgmap v223: 196 pgs: 196 active+clean; 455 KiB data, 366 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:56:02.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:02 vm03 bash[23394]: cluster 2026-03-10T14:56:01.270878+0000 mon.a (mon.0) 1067 : cluster [DBG] osdmap e182: 8 total, 8 up, 8 in 2026-03-10T14:56:02.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:02 vm03 bash[23394]: cluster 2026-03-10T14:56:01.270878+0000 mon.a (mon.0) 1067 : cluster [DBG] osdmap e182: 8 total, 8 up, 8 in 2026-03-10T14:56:02.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:02 vm00 bash[28403]: cluster 2026-03-10T14:56:01.270878+0000 mon.a (mon.0) 1067 : cluster [DBG] osdmap e182: 8 total, 8 up, 8 in 2026-03-10T14:56:02.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:02 vm00 bash[28403]: cluster 2026-03-10T14:56:01.270878+0000 mon.a (mon.0) 1067 : cluster [DBG] osdmap e182: 8 total, 8 up, 8 in 2026-03-10T14:56:02.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:02 vm00 bash[20726]: cluster 2026-03-10T14:56:01.270878+0000 mon.a (mon.0) 1067 : cluster [DBG] osdmap e182: 8 total, 8 up, 8 in 2026-03-10T14:56:02.715 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:02 vm00 bash[20726]: cluster 2026-03-10T14:56:01.270878+0000 mon.a (mon.0) 1067 : cluster [DBG] osdmap e182: 8 total, 8 up, 8 in 2026-03-10T14:56:03.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:03 vm03 bash[23394]: cluster 2026-03-10T14:56:02.271386+0000 mgr.y (mgr.24425) 167 : cluster [DBG] pgmap v225: 164 pgs: 164 active+clean; 455 KiB data, 366 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:56:03.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:03 vm03 bash[23394]: cluster 2026-03-10T14:56:02.271386+0000 mgr.y (mgr.24425) 167 : cluster [DBG] pgmap v225: 164 pgs: 164 active+clean; 455 KiB data, 366 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:56:03.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:03 vm03 bash[23394]: cluster 2026-03-10T14:56:02.279904+0000 mon.a (mon.0) 1068 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:56:03.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:03 vm03 bash[23394]: cluster 2026-03-10T14:56:02.279904+0000 mon.a (mon.0) 1068 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:56:03.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:03 vm03 bash[23394]: cluster 2026-03-10T14:56:02.308730+0000 mon.a (mon.0) 1069 : cluster [DBG] osdmap e183: 8 total, 8 up, 8 in 2026-03-10T14:56:03.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:03 vm03 bash[23394]: cluster 2026-03-10T14:56:02.308730+0000 mon.a (mon.0) 1069 : cluster [DBG] osdmap e183: 8 total, 8 up, 8 in 2026-03-10T14:56:03.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:03 vm00 bash[28403]: cluster 2026-03-10T14:56:02.271386+0000 mgr.y (mgr.24425) 167 : cluster [DBG] pgmap v225: 164 pgs: 164 active+clean; 455 KiB data, 366 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-10T14:56:03.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:03 vm00 bash[28403]: cluster 2026-03-10T14:56:02.271386+0000 mgr.y (mgr.24425) 167 : cluster [DBG] pgmap v225: 164 pgs: 164 active+clean; 455 KiB data, 366 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:56:03.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:03 vm00 bash[28403]: cluster 2026-03-10T14:56:02.279904+0000 mon.a (mon.0) 1068 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:56:03.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:03 vm00 bash[28403]: cluster 2026-03-10T14:56:02.279904+0000 mon.a (mon.0) 1068 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:56:03.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:03 vm00 bash[28403]: cluster 2026-03-10T14:56:02.308730+0000 mon.a (mon.0) 1069 : cluster [DBG] osdmap e183: 8 total, 8 up, 8 in 2026-03-10T14:56:03.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:03 vm00 bash[28403]: cluster 2026-03-10T14:56:02.308730+0000 mon.a (mon.0) 1069 : cluster [DBG] osdmap e183: 8 total, 8 up, 8 in 2026-03-10T14:56:03.965 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:56:03 vm00 bash[21005]: ::ffff:192.168.123.103 - - [10/Mar/2026:14:56:03] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:56:03.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:03 vm00 bash[20726]: cluster 2026-03-10T14:56:02.271386+0000 mgr.y (mgr.24425) 167 : cluster [DBG] pgmap v225: 164 pgs: 164 active+clean; 455 KiB data, 366 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:56:03.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:03 vm00 bash[20726]: cluster 2026-03-10T14:56:02.271386+0000 mgr.y (mgr.24425) 167 : cluster [DBG] pgmap v225: 164 pgs: 164 active+clean; 455 KiB data, 366 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-10T14:56:03.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:03 vm00 bash[20726]: cluster 2026-03-10T14:56:02.279904+0000 mon.a (mon.0) 1068 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:56:03.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:03 vm00 bash[20726]: cluster 2026-03-10T14:56:02.279904+0000 mon.a (mon.0) 1068 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:56:03.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:03 vm00 bash[20726]: cluster 2026-03-10T14:56:02.308730+0000 mon.a (mon.0) 1069 : cluster [DBG] osdmap e183: 8 total, 8 up, 8 in 2026-03-10T14:56:03.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:03 vm00 bash[20726]: cluster 2026-03-10T14:56:02.308730+0000 mon.a (mon.0) 1069 : cluster [DBG] osdmap e183: 8 total, 8 up, 8 in 2026-03-10T14:56:05.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:04 vm00 bash[20726]: cluster 2026-03-10T14:56:03.465548+0000 mon.a (mon.0) 1070 : cluster [DBG] osdmap e184: 8 total, 8 up, 8 in 2026-03-10T14:56:05.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:04 vm00 bash[20726]: cluster 2026-03-10T14:56:03.465548+0000 mon.a (mon.0) 1070 : cluster [DBG] osdmap e184: 8 total, 8 up, 8 in 2026-03-10T14:56:05.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:04 vm00 bash[20726]: cluster 2026-03-10T14:56:04.535316+0000 mon.a (mon.0) 1071 : cluster [DBG] osdmap e185: 8 total, 8 up, 8 in 2026-03-10T14:56:05.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:04 vm00 bash[20726]: cluster 2026-03-10T14:56:04.535316+0000 mon.a (mon.0) 1071 : cluster [DBG] osdmap e185: 8 total, 8 up, 8 in 2026-03-10T14:56:05.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:04 vm00 bash[28403]: cluster 2026-03-10T14:56:03.465548+0000 mon.a (mon.0) 1070 : cluster [DBG] osdmap e184: 8 total, 8 up, 8 in 2026-03-10T14:56:05.215 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:04 vm00 bash[28403]: cluster 2026-03-10T14:56:03.465548+0000 mon.a (mon.0) 1070 : cluster [DBG] osdmap e184: 8 total, 8 up, 8 in 2026-03-10T14:56:05.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:04 vm00 bash[28403]: cluster 2026-03-10T14:56:04.535316+0000 mon.a (mon.0) 1071 : cluster [DBG] osdmap e185: 8 total, 8 up, 8 in 2026-03-10T14:56:05.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:04 vm00 bash[28403]: cluster 2026-03-10T14:56:04.535316+0000 mon.a (mon.0) 1071 : cluster [DBG] osdmap e185: 8 total, 8 up, 8 in 2026-03-10T14:56:05.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:04 vm03 bash[23394]: cluster 2026-03-10T14:56:03.465548+0000 mon.a (mon.0) 1070 : cluster [DBG] osdmap e184: 8 total, 8 up, 8 in 2026-03-10T14:56:05.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:04 vm03 bash[23394]: cluster 2026-03-10T14:56:03.465548+0000 mon.a (mon.0) 1070 : cluster [DBG] osdmap e184: 8 total, 8 up, 8 in 2026-03-10T14:56:05.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:04 vm03 bash[23394]: cluster 2026-03-10T14:56:04.535316+0000 mon.a (mon.0) 1071 : cluster [DBG] osdmap e185: 8 total, 8 up, 8 in 2026-03-10T14:56:05.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:04 vm03 bash[23394]: cluster 2026-03-10T14:56:04.535316+0000 mon.a (mon.0) 1071 : cluster [DBG] osdmap e185: 8 total, 8 up, 8 in 2026-03-10T14:56:06.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:05 vm03 bash[23394]: cluster 2026-03-10T14:56:04.271776+0000 mgr.y (mgr.24425) 168 : cluster [DBG] pgmap v228: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 366 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:56:06.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:05 vm03 bash[23394]: cluster 2026-03-10T14:56:04.271776+0000 mgr.y (mgr.24425) 168 : cluster [DBG] pgmap v228: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 366 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s 
rd, 1 op/s 2026-03-10T14:56:06.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:05 vm03 bash[23394]: audit 2026-03-10T14:56:04.539754+0000 mon.c (mon.2) 40 : audit [INF] from='client.? 192.168.123.100:0/2372793240' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:06.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:05 vm03 bash[23394]: audit 2026-03-10T14:56:04.539754+0000 mon.c (mon.2) 40 : audit [INF] from='client.? 192.168.123.100:0/2372793240' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:06.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:05 vm03 bash[23394]: audit 2026-03-10T14:56:04.637707+0000 mon.a (mon.0) 1072 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:06.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:05 vm03 bash[23394]: audit 2026-03-10T14:56:04.637707+0000 mon.a (mon.0) 1072 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:06.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:05 vm03 bash[23394]: audit 2026-03-10T14:56:05.659480+0000 mon.a (mon.0) 1073 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:56:06.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:05 vm03 bash[23394]: audit 2026-03-10T14:56:05.659480+0000 mon.a (mon.0) 1073 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:56:06.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:05 vm03 bash[23394]: cluster 2026-03-10T14:56:05.670345+0000 mon.a (mon.0) 1074 : cluster [DBG] osdmap e186: 8 total, 8 up, 8 in 2026-03-10T14:56:06.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:05 vm03 bash[23394]: cluster 2026-03-10T14:56:05.670345+0000 mon.a (mon.0) 1074 : cluster [DBG] osdmap e186: 8 total, 8 up, 8 in 2026-03-10T14:56:06.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:05 vm00 bash[28403]: cluster 2026-03-10T14:56:04.271776+0000 mgr.y (mgr.24425) 168 : cluster [DBG] pgmap v228: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 366 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:56:06.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:05 vm00 bash[28403]: cluster 2026-03-10T14:56:04.271776+0000 mgr.y (mgr.24425) 168 : cluster [DBG] pgmap v228: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 366 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:56:06.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:05 vm00 bash[28403]: audit 2026-03-10T14:56:04.539754+0000 mon.c (mon.2) 40 : audit [INF] from='client.? 192.168.123.100:0/2372793240' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:06.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:05 vm00 bash[28403]: audit 2026-03-10T14:56:04.539754+0000 mon.c (mon.2) 40 : audit [INF] from='client.? 192.168.123.100:0/2372793240' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:06.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:05 vm00 bash[28403]: audit 2026-03-10T14:56:04.637707+0000 mon.a (mon.0) 1072 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:06.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:05 vm00 bash[28403]: audit 2026-03-10T14:56:04.637707+0000 mon.a (mon.0) 1072 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:06.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:05 vm00 bash[28403]: audit 2026-03-10T14:56:05.659480+0000 mon.a (mon.0) 1073 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:56:06.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:05 vm00 bash[28403]: audit 2026-03-10T14:56:05.659480+0000 mon.a (mon.0) 1073 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:56:06.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:05 vm00 bash[28403]: cluster 2026-03-10T14:56:05.670345+0000 mon.a (mon.0) 1074 : cluster [DBG] osdmap e186: 8 total, 8 up, 8 in 2026-03-10T14:56:06.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:05 vm00 bash[28403]: cluster 2026-03-10T14:56:05.670345+0000 mon.a (mon.0) 1074 : cluster [DBG] osdmap e186: 8 total, 8 up, 8 in 2026-03-10T14:56:06.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:05 vm00 bash[20726]: cluster 2026-03-10T14:56:04.271776+0000 mgr.y (mgr.24425) 168 : cluster [DBG] pgmap v228: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 366 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:56:06.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:05 vm00 bash[20726]: cluster 2026-03-10T14:56:04.271776+0000 mgr.y (mgr.24425) 168 : cluster [DBG] pgmap v228: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 366 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:56:06.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:05 vm00 bash[20726]: audit 
2026-03-10T14:56:04.539754+0000 mon.c (mon.2) 40 : audit [INF] from='client.? 192.168.123.100:0/2372793240' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:06.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:05 vm00 bash[20726]: audit 2026-03-10T14:56:04.539754+0000 mon.c (mon.2) 40 : audit [INF] from='client.? 192.168.123.100:0/2372793240' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:06.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:05 vm00 bash[20726]: audit 2026-03-10T14:56:04.637707+0000 mon.a (mon.0) 1072 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:06.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:05 vm00 bash[20726]: audit 2026-03-10T14:56:04.637707+0000 mon.a (mon.0) 1072 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:06.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:05 vm00 bash[20726]: audit 2026-03-10T14:56:05.659480+0000 mon.a (mon.0) 1073 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:56:06.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:05 vm00 bash[20726]: audit 2026-03-10T14:56:05.659480+0000 mon.a (mon.0) 1073 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:56:06.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:05 vm00 bash[20726]: cluster 2026-03-10T14:56:05.670345+0000 mon.a (mon.0) 1074 : cluster [DBG] osdmap e186: 8 total, 8 up, 8 in 2026-03-10T14:56:06.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:05 vm00 bash[20726]: cluster 2026-03-10T14:56:05.670345+0000 mon.a (mon.0) 1074 : cluster [DBG] osdmap e186: 8 total, 8 up, 8 in 2026-03-10T14:56:06.707 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_remove_snap PASSED [ 47%] 2026-03-10T14:56:07.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:06 vm03 bash[23394]: cluster 2026-03-10T14:56:06.272087+0000 mgr.y (mgr.24425) 169 : cluster [DBG] pgmap v231: 196 pgs: 196 active+clean; 455 KiB data, 384 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:56:07.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:06 vm03 bash[23394]: cluster 2026-03-10T14:56:06.272087+0000 mgr.y (mgr.24425) 169 : cluster [DBG] pgmap v231: 196 pgs: 196 active+clean; 455 KiB data, 384 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:56:07.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:06 vm03 bash[23394]: audit 2026-03-10T14:56:06.620042+0000 mon.a (mon.0) 1075 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:56:07.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:06 vm03 bash[23394]: audit 2026-03-10T14:56:06.620042+0000 mon.a (mon.0) 1075 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:56:07.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:06 vm03 bash[23394]: cluster 2026-03-10T14:56:06.696115+0000 mon.a (mon.0) 1076 : cluster [DBG] osdmap e187: 8 total, 
8 up, 8 in 2026-03-10T14:56:07.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:06 vm03 bash[23394]: cluster 2026-03-10T14:56:06.696115+0000 mon.a (mon.0) 1076 : cluster [DBG] osdmap e187: 8 total, 8 up, 8 in 2026-03-10T14:56:07.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:06 vm00 bash[20726]: cluster 2026-03-10T14:56:06.272087+0000 mgr.y (mgr.24425) 169 : cluster [DBG] pgmap v231: 196 pgs: 196 active+clean; 455 KiB data, 384 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:56:07.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:06 vm00 bash[20726]: cluster 2026-03-10T14:56:06.272087+0000 mgr.y (mgr.24425) 169 : cluster [DBG] pgmap v231: 196 pgs: 196 active+clean; 455 KiB data, 384 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:56:07.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:06 vm00 bash[20726]: audit 2026-03-10T14:56:06.620042+0000 mon.a (mon.0) 1075 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:56:07.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:06 vm00 bash[20726]: audit 2026-03-10T14:56:06.620042+0000 mon.a (mon.0) 1075 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:56:07.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:06 vm00 bash[20726]: cluster 2026-03-10T14:56:06.696115+0000 mon.a (mon.0) 1076 : cluster [DBG] osdmap e187: 8 total, 8 up, 8 in 2026-03-10T14:56:07.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:06 vm00 bash[20726]: cluster 2026-03-10T14:56:06.696115+0000 mon.a (mon.0) 1076 : cluster [DBG] osdmap e187: 8 total, 8 up, 8 in 2026-03-10T14:56:07.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:06 vm00 bash[28403]: cluster 2026-03-10T14:56:06.272087+0000 mgr.y (mgr.24425) 169 : cluster [DBG] pgmap v231: 196 pgs: 196 active+clean; 455 KiB 
data, 384 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:56:07.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:06 vm00 bash[28403]: cluster 2026-03-10T14:56:06.272087+0000 mgr.y (mgr.24425) 169 : cluster [DBG] pgmap v231: 196 pgs: 196 active+clean; 455 KiB data, 384 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:56:07.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:06 vm00 bash[28403]: audit 2026-03-10T14:56:06.620042+0000 mon.a (mon.0) 1075 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:56:07.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:06 vm00 bash[28403]: audit 2026-03-10T14:56:06.620042+0000 mon.a (mon.0) 1075 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:56:07.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:06 vm00 bash[28403]: cluster 2026-03-10T14:56:06.696115+0000 mon.a (mon.0) 1076 : cluster [DBG] osdmap e187: 8 total, 8 up, 8 in 2026-03-10T14:56:07.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:06 vm00 bash[28403]: cluster 2026-03-10T14:56:06.696115+0000 mon.a (mon.0) 1076 : cluster [DBG] osdmap e187: 8 total, 8 up, 8 in 2026-03-10T14:56:08.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:08 vm03 bash[23394]: audit 2026-03-10T14:56:07.010157+0000 mon.a (mon.0) 1077 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:56:08.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:08 vm03 bash[23394]: audit 2026-03-10T14:56:07.010157+0000 mon.a (mon.0) 1077 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:56:08.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:08 vm03 bash[23394]: audit 2026-03-10T14:56:07.014969+0000 mon.a (mon.0) 1078 : audit [INF] from='mgr.24425 
192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:56:08.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:08 vm03 bash[23394]: audit 2026-03-10T14:56:07.014969+0000 mon.a (mon.0) 1078 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:56:08.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:08 vm03 bash[23394]: audit 2026-03-10T14:56:07.360662+0000 mon.a (mon.0) 1079 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:56:08.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:08 vm03 bash[23394]: audit 2026-03-10T14:56:07.360662+0000 mon.a (mon.0) 1079 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:56:08.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:08 vm03 bash[23394]: audit 2026-03-10T14:56:07.361498+0000 mon.a (mon.0) 1080 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:56:08.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:08 vm03 bash[23394]: audit 2026-03-10T14:56:07.361498+0000 mon.a (mon.0) 1080 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:56:08.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:08 vm03 bash[23394]: audit 2026-03-10T14:56:07.367726+0000 mon.a (mon.0) 1081 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:56:08.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:08 vm03 bash[23394]: audit 2026-03-10T14:56:07.367726+0000 mon.a (mon.0) 1081 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:56:08.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:08 vm03 bash[23394]: cluster 
2026-03-10T14:56:07.692588+0000 mon.a (mon.0) 1082 : cluster [DBG] osdmap e188: 8 total, 8 up, 8 in 2026-03-10T14:56:08.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:08 vm03 bash[23394]: cluster 2026-03-10T14:56:07.692588+0000 mon.a (mon.0) 1082 : cluster [DBG] osdmap e188: 8 total, 8 up, 8 in 2026-03-10T14:56:08.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:08 vm00 bash[28403]: audit 2026-03-10T14:56:07.010157+0000 mon.a (mon.0) 1077 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:56:08.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:08 vm00 bash[28403]: audit 2026-03-10T14:56:07.010157+0000 mon.a (mon.0) 1077 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:56:08.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:08 vm00 bash[28403]: audit 2026-03-10T14:56:07.014969+0000 mon.a (mon.0) 1078 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:56:08.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:08 vm00 bash[28403]: audit 2026-03-10T14:56:07.014969+0000 mon.a (mon.0) 1078 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:56:08.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:08 vm00 bash[28403]: audit 2026-03-10T14:56:07.360662+0000 mon.a (mon.0) 1079 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:56:08.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:08 vm00 bash[28403]: audit 2026-03-10T14:56:07.360662+0000 mon.a (mon.0) 1079 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:56:08.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:08 vm00 bash[28403]: audit 2026-03-10T14:56:07.361498+0000 mon.a (mon.0) 1080 : audit [INF] from='mgr.24425 
192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:56:08.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:08 vm00 bash[28403]: audit 2026-03-10T14:56:07.361498+0000 mon.a (mon.0) 1080 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:56:08.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:08 vm00 bash[28403]: audit 2026-03-10T14:56:07.367726+0000 mon.a (mon.0) 1081 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:56:08.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:08 vm00 bash[28403]: audit 2026-03-10T14:56:07.367726+0000 mon.a (mon.0) 1081 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:56:08.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:08 vm00 bash[28403]: cluster 2026-03-10T14:56:07.692588+0000 mon.a (mon.0) 1082 : cluster [DBG] osdmap e188: 8 total, 8 up, 8 in 2026-03-10T14:56:08.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:08 vm00 bash[28403]: cluster 2026-03-10T14:56:07.692588+0000 mon.a (mon.0) 1082 : cluster [DBG] osdmap e188: 8 total, 8 up, 8 in 2026-03-10T14:56:08.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:08 vm00 bash[20726]: audit 2026-03-10T14:56:07.010157+0000 mon.a (mon.0) 1077 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:56:08.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:08 vm00 bash[20726]: audit 2026-03-10T14:56:07.010157+0000 mon.a (mon.0) 1077 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:56:08.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:08 vm00 bash[20726]: audit 2026-03-10T14:56:07.014969+0000 mon.a (mon.0) 1078 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:56:08.465 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:08 vm00 bash[20726]: audit 2026-03-10T14:56:07.014969+0000 mon.a (mon.0) 1078 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:56:08.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:08 vm00 bash[20726]: audit 2026-03-10T14:56:07.360662+0000 mon.a (mon.0) 1079 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:56:08.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:08 vm00 bash[20726]: audit 2026-03-10T14:56:07.360662+0000 mon.a (mon.0) 1079 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:56:08.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:08 vm00 bash[20726]: audit 2026-03-10T14:56:07.361498+0000 mon.a (mon.0) 1080 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:56:08.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:08 vm00 bash[20726]: audit 2026-03-10T14:56:07.361498+0000 mon.a (mon.0) 1080 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:56:08.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:08 vm00 bash[20726]: audit 2026-03-10T14:56:07.367726+0000 mon.a (mon.0) 1081 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:56:08.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:08 vm00 bash[20726]: audit 2026-03-10T14:56:07.367726+0000 mon.a (mon.0) 1081 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:56:08.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:08 vm00 bash[20726]: cluster 2026-03-10T14:56:07.692588+0000 mon.a (mon.0) 1082 : cluster [DBG] osdmap 
e188: 8 total, 8 up, 8 in 2026-03-10T14:56:08.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:08 vm00 bash[20726]: cluster 2026-03-10T14:56:07.692588+0000 mon.a (mon.0) 1082 : cluster [DBG] osdmap e188: 8 total, 8 up, 8 in 2026-03-10T14:56:09.125 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 14:56:08 vm03 bash[48459]: debug there is no tcmu-runner data available 2026-03-10T14:56:09.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:09 vm03 bash[23394]: cluster 2026-03-10T14:56:08.272422+0000 mgr.y (mgr.24425) 170 : cluster [DBG] pgmap v234: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 384 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:56:09.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:09 vm03 bash[23394]: cluster 2026-03-10T14:56:08.272422+0000 mgr.y (mgr.24425) 170 : cluster [DBG] pgmap v234: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 384 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:56:09.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:09 vm03 bash[23394]: cluster 2026-03-10T14:56:08.445682+0000 mon.a (mon.0) 1083 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:56:09.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:09 vm03 bash[23394]: cluster 2026-03-10T14:56:08.445682+0000 mon.a (mon.0) 1083 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:56:09.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:09 vm03 bash[23394]: audit 2026-03-10T14:56:08.672614+0000 mgr.y (mgr.24425) 171 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:56:09.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:09 vm03 bash[23394]: audit 2026-03-10T14:56:08.672614+0000 mgr.y (mgr.24425) 171 : audit [DBG] from='client.14514 -' 
entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:56:09.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:09 vm03 bash[23394]: cluster 2026-03-10T14:56:08.703474+0000 mon.a (mon.0) 1084 : cluster [DBG] osdmap e189: 8 total, 8 up, 8 in 2026-03-10T14:56:09.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:09 vm03 bash[23394]: cluster 2026-03-10T14:56:08.703474+0000 mon.a (mon.0) 1084 : cluster [DBG] osdmap e189: 8 total, 8 up, 8 in 2026-03-10T14:56:09.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:09 vm00 bash[28403]: cluster 2026-03-10T14:56:08.272422+0000 mgr.y (mgr.24425) 170 : cluster [DBG] pgmap v234: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 384 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:56:09.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:09 vm00 bash[28403]: cluster 2026-03-10T14:56:08.272422+0000 mgr.y (mgr.24425) 170 : cluster [DBG] pgmap v234: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 384 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:56:09.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:09 vm00 bash[28403]: cluster 2026-03-10T14:56:08.445682+0000 mon.a (mon.0) 1083 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:56:09.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:09 vm00 bash[28403]: cluster 2026-03-10T14:56:08.445682+0000 mon.a (mon.0) 1083 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:56:09.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:09 vm00 bash[28403]: audit 2026-03-10T14:56:08.672614+0000 mgr.y (mgr.24425) 171 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:56:09.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:09 vm00 
bash[28403]: audit 2026-03-10T14:56:08.672614+0000 mgr.y (mgr.24425) 171 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:56:09.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:09 vm00 bash[28403]: cluster 2026-03-10T14:56:08.703474+0000 mon.a (mon.0) 1084 : cluster [DBG] osdmap e189: 8 total, 8 up, 8 in 2026-03-10T14:56:09.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:09 vm00 bash[28403]: cluster 2026-03-10T14:56:08.703474+0000 mon.a (mon.0) 1084 : cluster [DBG] osdmap e189: 8 total, 8 up, 8 in 2026-03-10T14:56:09.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:09 vm00 bash[20726]: cluster 2026-03-10T14:56:08.272422+0000 mgr.y (mgr.24425) 170 : cluster [DBG] pgmap v234: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 384 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:56:09.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:09 vm00 bash[20726]: cluster 2026-03-10T14:56:08.272422+0000 mgr.y (mgr.24425) 170 : cluster [DBG] pgmap v234: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 384 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:56:09.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:09 vm00 bash[20726]: cluster 2026-03-10T14:56:08.445682+0000 mon.a (mon.0) 1083 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:56:09.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:09 vm00 bash[20726]: cluster 2026-03-10T14:56:08.445682+0000 mon.a (mon.0) 1083 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:56:09.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:09 vm00 bash[20726]: audit 2026-03-10T14:56:08.672614+0000 mgr.y (mgr.24425) 171 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", 
"format": "json"}]: dispatch 2026-03-10T14:56:09.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:09 vm00 bash[20726]: audit 2026-03-10T14:56:08.672614+0000 mgr.y (mgr.24425) 171 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:56:09.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:09 vm00 bash[20726]: cluster 2026-03-10T14:56:08.703474+0000 mon.a (mon.0) 1084 : cluster [DBG] osdmap e189: 8 total, 8 up, 8 in 2026-03-10T14:56:09.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:09 vm00 bash[20726]: cluster 2026-03-10T14:56:08.703474+0000 mon.a (mon.0) 1084 : cluster [DBG] osdmap e189: 8 total, 8 up, 8 in 2026-03-10T14:56:10.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:10 vm03 bash[23394]: audit 2026-03-10T14:56:09.443860+0000 mon.a (mon.0) 1085 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:56:10.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:10 vm03 bash[23394]: audit 2026-03-10T14:56:09.443860+0000 mon.a (mon.0) 1085 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:56:10.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:10 vm03 bash[23394]: cluster 2026-03-10T14:56:09.707877+0000 mon.a (mon.0) 1086 : cluster [DBG] osdmap e190: 8 total, 8 up, 8 in 2026-03-10T14:56:10.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:10 vm03 bash[23394]: cluster 2026-03-10T14:56:09.707877+0000 mon.a (mon.0) 1086 : cluster [DBG] osdmap e190: 8 total, 8 up, 8 in 2026-03-10T14:56:10.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:10 vm00 bash[28403]: audit 2026-03-10T14:56:09.443860+0000 mon.a (mon.0) 1085 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": 
"json"}]: dispatch 2026-03-10T14:56:10.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:10 vm00 bash[28403]: audit 2026-03-10T14:56:09.443860+0000 mon.a (mon.0) 1085 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:56:10.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:10 vm00 bash[28403]: cluster 2026-03-10T14:56:09.707877+0000 mon.a (mon.0) 1086 : cluster [DBG] osdmap e190: 8 total, 8 up, 8 in 2026-03-10T14:56:10.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:10 vm00 bash[28403]: cluster 2026-03-10T14:56:09.707877+0000 mon.a (mon.0) 1086 : cluster [DBG] osdmap e190: 8 total, 8 up, 8 in 2026-03-10T14:56:10.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:10 vm00 bash[20726]: audit 2026-03-10T14:56:09.443860+0000 mon.a (mon.0) 1085 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:56:10.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:10 vm00 bash[20726]: audit 2026-03-10T14:56:09.443860+0000 mon.a (mon.0) 1085 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:56:10.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:10 vm00 bash[20726]: cluster 2026-03-10T14:56:09.707877+0000 mon.a (mon.0) 1086 : cluster [DBG] osdmap e190: 8 total, 8 up, 8 in 2026-03-10T14:56:10.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:10 vm00 bash[20726]: cluster 2026-03-10T14:56:09.707877+0000 mon.a (mon.0) 1086 : cluster [DBG] osdmap e190: 8 total, 8 up, 8 in 2026-03-10T14:56:11.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:11 vm00 bash[28403]: cluster 2026-03-10T14:56:10.273140+0000 mgr.y (mgr.24425) 172 : cluster [DBG] pgmap v237: 196 pgs: 196 active+clean; 455 KiB data, 384 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 
255 B/s wr, 1 op/s 2026-03-10T14:56:11.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:11 vm00 bash[28403]: cluster 2026-03-10T14:56:10.273140+0000 mgr.y (mgr.24425) 172 : cluster [DBG] pgmap v237: 196 pgs: 196 active+clean; 455 KiB data, 384 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T14:56:11.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:11 vm00 bash[28403]: cluster 2026-03-10T14:56:10.713841+0000 mon.a (mon.0) 1087 : cluster [DBG] osdmap e191: 8 total, 8 up, 8 in 2026-03-10T14:56:11.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:11 vm00 bash[28403]: cluster 2026-03-10T14:56:10.713841+0000 mon.a (mon.0) 1087 : cluster [DBG] osdmap e191: 8 total, 8 up, 8 in 2026-03-10T14:56:11.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:11 vm00 bash[28403]: audit 2026-03-10T14:56:10.716584+0000 mon.b (mon.1) 36 : audit [INF] from='client.? 192.168.123.100:0/192768000' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:11.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:11 vm00 bash[28403]: audit 2026-03-10T14:56:10.716584+0000 mon.b (mon.1) 36 : audit [INF] from='client.? 192.168.123.100:0/192768000' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:11.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:11 vm00 bash[28403]: audit 2026-03-10T14:56:10.720622+0000 mon.a (mon.0) 1088 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:11.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:11 vm00 bash[28403]: audit 2026-03-10T14:56:10.720622+0000 mon.a (mon.0) 1088 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:11.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:11 vm00 bash[20726]: cluster 2026-03-10T14:56:10.273140+0000 mgr.y (mgr.24425) 172 : cluster [DBG] pgmap v237: 196 pgs: 196 active+clean; 455 KiB data, 384 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T14:56:11.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:11 vm00 bash[20726]: cluster 2026-03-10T14:56:10.273140+0000 mgr.y (mgr.24425) 172 : cluster [DBG] pgmap v237: 196 pgs: 196 active+clean; 455 KiB data, 384 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T14:56:11.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:11 vm00 bash[20726]: cluster 2026-03-10T14:56:10.713841+0000 mon.a (mon.0) 1087 : cluster [DBG] osdmap e191: 8 total, 8 up, 8 in 2026-03-10T14:56:11.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:11 vm00 bash[20726]: cluster 2026-03-10T14:56:10.713841+0000 mon.a (mon.0) 1087 : cluster [DBG] osdmap e191: 8 total, 8 up, 8 in 2026-03-10T14:56:11.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:11 vm00 bash[20726]: audit 2026-03-10T14:56:10.716584+0000 mon.b (mon.1) 36 : audit [INF] from='client.? 192.168.123.100:0/192768000' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:11.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:11 vm00 bash[20726]: audit 2026-03-10T14:56:10.716584+0000 mon.b (mon.1) 36 : audit [INF] from='client.? 192.168.123.100:0/192768000' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:11.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:11 vm00 bash[20726]: audit 2026-03-10T14:56:10.720622+0000 mon.a (mon.0) 1088 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:11.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:11 vm00 bash[20726]: audit 2026-03-10T14:56:10.720622+0000 mon.a (mon.0) 1088 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:12.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:11 vm03 bash[23394]: cluster 2026-03-10T14:56:10.273140+0000 mgr.y (mgr.24425) 172 : cluster [DBG] pgmap v237: 196 pgs: 196 active+clean; 455 KiB data, 384 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T14:56:12.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:11 vm03 bash[23394]: cluster 2026-03-10T14:56:10.273140+0000 mgr.y (mgr.24425) 172 : cluster [DBG] pgmap v237: 196 pgs: 196 active+clean; 455 KiB data, 384 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T14:56:12.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:11 vm03 bash[23394]: cluster 2026-03-10T14:56:10.713841+0000 mon.a (mon.0) 1087 : cluster [DBG] osdmap e191: 8 total, 8 up, 8 in 2026-03-10T14:56:12.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:11 vm03 bash[23394]: cluster 2026-03-10T14:56:10.713841+0000 mon.a (mon.0) 1087 : cluster [DBG] osdmap e191: 8 total, 8 up, 8 in 2026-03-10T14:56:12.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:11 vm03 bash[23394]: audit 2026-03-10T14:56:10.716584+0000 mon.b (mon.1) 36 : audit [INF] from='client.? 192.168.123.100:0/192768000' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:12.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:11 vm03 bash[23394]: audit 2026-03-10T14:56:10.716584+0000 mon.b (mon.1) 36 : audit [INF] from='client.? 
192.168.123.100:0/192768000' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:12.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:11 vm03 bash[23394]: audit 2026-03-10T14:56:10.720622+0000 mon.a (mon.0) 1088 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:12.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:11 vm03 bash[23394]: audit 2026-03-10T14:56:10.720622+0000 mon.a (mon.0) 1088 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:12.722 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_snap_rollback PASSED [ 48%] 2026-03-10T14:56:13.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:12 vm03 bash[23394]: audit 2026-03-10T14:56:11.711713+0000 mon.a (mon.0) 1089 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:56:13.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:12 vm03 bash[23394]: audit 2026-03-10T14:56:11.711713+0000 mon.a (mon.0) 1089 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:56:13.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:12 vm03 bash[23394]: cluster 2026-03-10T14:56:11.719053+0000 mon.a (mon.0) 1090 : cluster [DBG] osdmap e192: 8 total, 8 up, 8 in 2026-03-10T14:56:13.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:12 vm03 bash[23394]: cluster 2026-03-10T14:56:11.719053+0000 mon.a (mon.0) 1090 : cluster [DBG] osdmap e192: 8 total, 8 up, 8 in 2026-03-10T14:56:13.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:12 vm00 bash[28403]: audit 2026-03-10T14:56:11.711713+0000 mon.a (mon.0) 1089 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:56:13.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:12 vm00 bash[28403]: audit 2026-03-10T14:56:11.711713+0000 mon.a (mon.0) 1089 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:56:13.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:12 vm00 bash[28403]: cluster 2026-03-10T14:56:11.719053+0000 mon.a (mon.0) 1090 : cluster [DBG] osdmap e192: 8 total, 8 up, 8 in 2026-03-10T14:56:13.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:12 vm00 bash[28403]: cluster 2026-03-10T14:56:11.719053+0000 mon.a (mon.0) 1090 : cluster [DBG] osdmap e192: 8 total, 8 up, 8 in 2026-03-10T14:56:13.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:12 vm00 bash[20726]: audit 2026-03-10T14:56:11.711713+0000 mon.a (mon.0) 1089 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:56:13.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:12 vm00 bash[20726]: audit 2026-03-10T14:56:11.711713+0000 mon.a (mon.0) 1089 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:56:13.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:12 vm00 bash[20726]: cluster 2026-03-10T14:56:11.719053+0000 mon.a (mon.0) 1090 : cluster [DBG] osdmap e192: 8 total, 8 up, 8 in 2026-03-10T14:56:13.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:12 vm00 bash[20726]: cluster 2026-03-10T14:56:11.719053+0000 mon.a (mon.0) 1090 : cluster [DBG] osdmap e192: 8 total, 8 up, 8 in 2026-03-10T14:56:14.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:13 vm03 bash[23394]: cluster 2026-03-10T14:56:12.273425+0000 mgr.y (mgr.24425) 173 : cluster [DBG] pgmap v240: 196 pgs: 196 active+clean; 455 KiB data, 384 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T14:56:14.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:13 vm03 bash[23394]: cluster 2026-03-10T14:56:12.273425+0000 mgr.y (mgr.24425) 173 : cluster [DBG] pgmap v240: 196 pgs: 196 active+clean; 455 KiB data, 384 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T14:56:14.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:13 vm03 bash[23394]: cluster 2026-03-10T14:56:12.719956+0000 mon.a (mon.0) 1091 : cluster [DBG] osdmap e193: 8 total, 8 up, 8 in 2026-03-10T14:56:14.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:13 vm03 bash[23394]: cluster 2026-03-10T14:56:12.719956+0000 mon.a (mon.0) 1091 : cluster [DBG] osdmap e193: 8 total, 8 up, 8 in 2026-03-10T14:56:14.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:13 vm03 bash[23394]: cluster 2026-03-10T14:56:13.742814+0000 mon.a (mon.0) 1092 : cluster [DBG] osdmap e194: 8 total, 8 up, 8 in 2026-03-10T14:56:14.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:13 vm03 bash[23394]: cluster 2026-03-10T14:56:13.742814+0000 mon.a (mon.0) 1092 : cluster [DBG] osdmap e194: 8 total, 8 up, 8 in 2026-03-10T14:56:14.214 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:56:13 vm00 
bash[21005]: ::ffff:192.168.123.103 - - [10/Mar/2026:14:56:13] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:56:14.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:13 vm00 bash[20726]: cluster 2026-03-10T14:56:12.273425+0000 mgr.y (mgr.24425) 173 : cluster [DBG] pgmap v240: 196 pgs: 196 active+clean; 455 KiB data, 384 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T14:56:14.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:13 vm00 bash[20726]: cluster 2026-03-10T14:56:12.273425+0000 mgr.y (mgr.24425) 173 : cluster [DBG] pgmap v240: 196 pgs: 196 active+clean; 455 KiB data, 384 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T14:56:14.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:13 vm00 bash[20726]: cluster 2026-03-10T14:56:12.719956+0000 mon.a (mon.0) 1091 : cluster [DBG] osdmap e193: 8 total, 8 up, 8 in 2026-03-10T14:56:14.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:13 vm00 bash[20726]: cluster 2026-03-10T14:56:12.719956+0000 mon.a (mon.0) 1091 : cluster [DBG] osdmap e193: 8 total, 8 up, 8 in 2026-03-10T14:56:14.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:13 vm00 bash[20726]: cluster 2026-03-10T14:56:13.742814+0000 mon.a (mon.0) 1092 : cluster [DBG] osdmap e194: 8 total, 8 up, 8 in 2026-03-10T14:56:14.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:13 vm00 bash[20726]: cluster 2026-03-10T14:56:13.742814+0000 mon.a (mon.0) 1092 : cluster [DBG] osdmap e194: 8 total, 8 up, 8 in 2026-03-10T14:56:14.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:13 vm00 bash[28403]: cluster 2026-03-10T14:56:12.273425+0000 mgr.y (mgr.24425) 173 : cluster [DBG] pgmap v240: 196 pgs: 196 active+clean; 455 KiB data, 384 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T14:56:14.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:13 vm00 bash[28403]: cluster 2026-03-10T14:56:12.273425+0000 mgr.y (mgr.24425) 173 : 
cluster [DBG] pgmap v240: 196 pgs: 196 active+clean; 455 KiB data, 384 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T14:56:14.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:13 vm00 bash[28403]: cluster 2026-03-10T14:56:12.719956+0000 mon.a (mon.0) 1091 : cluster [DBG] osdmap e193: 8 total, 8 up, 8 in 2026-03-10T14:56:14.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:13 vm00 bash[28403]: cluster 2026-03-10T14:56:12.719956+0000 mon.a (mon.0) 1091 : cluster [DBG] osdmap e193: 8 total, 8 up, 8 in 2026-03-10T14:56:14.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:13 vm00 bash[28403]: cluster 2026-03-10T14:56:13.742814+0000 mon.a (mon.0) 1092 : cluster [DBG] osdmap e194: 8 total, 8 up, 8 in 2026-03-10T14:56:14.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:13 vm00 bash[28403]: cluster 2026-03-10T14:56:13.742814+0000 mon.a (mon.0) 1092 : cluster [DBG] osdmap e194: 8 total, 8 up, 8 in 2026-03-10T14:56:16.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:15 vm03 bash[23394]: cluster 2026-03-10T14:56:14.273693+0000 mgr.y (mgr.24425) 174 : cluster [DBG] pgmap v243: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 384 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:56:16.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:15 vm03 bash[23394]: cluster 2026-03-10T14:56:14.273693+0000 mgr.y (mgr.24425) 174 : cluster [DBG] pgmap v243: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 384 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:56:16.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:15 vm03 bash[23394]: cluster 2026-03-10T14:56:14.742119+0000 mon.a (mon.0) 1093 : cluster [DBG] osdmap e195: 8 total, 8 up, 8 in 2026-03-10T14:56:16.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:15 vm03 bash[23394]: cluster 2026-03-10T14:56:14.742119+0000 mon.a (mon.0) 1093 : cluster [DBG] osdmap e195: 8 total, 8 up, 8 in 2026-03-10T14:56:16.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 
14:56:15 vm00 bash[20726]: cluster 2026-03-10T14:56:14.273693+0000 mgr.y (mgr.24425) 174 : cluster [DBG] pgmap v243: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 384 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:56:16.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:15 vm00 bash[20726]: cluster 2026-03-10T14:56:14.273693+0000 mgr.y (mgr.24425) 174 : cluster [DBG] pgmap v243: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 384 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:56:16.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:15 vm00 bash[20726]: cluster 2026-03-10T14:56:14.742119+0000 mon.a (mon.0) 1093 : cluster [DBG] osdmap e195: 8 total, 8 up, 8 in 2026-03-10T14:56:16.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:15 vm00 bash[20726]: cluster 2026-03-10T14:56:14.742119+0000 mon.a (mon.0) 1093 : cluster [DBG] osdmap e195: 8 total, 8 up, 8 in 2026-03-10T14:56:16.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:15 vm00 bash[28403]: cluster 2026-03-10T14:56:14.273693+0000 mgr.y (mgr.24425) 174 : cluster [DBG] pgmap v243: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 384 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:56:16.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:15 vm00 bash[28403]: cluster 2026-03-10T14:56:14.273693+0000 mgr.y (mgr.24425) 174 : cluster [DBG] pgmap v243: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 384 MiB used, 160 GiB / 160 GiB avail 2026-03-10T14:56:16.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:15 vm00 bash[28403]: cluster 2026-03-10T14:56:14.742119+0000 mon.a (mon.0) 1093 : cluster [DBG] osdmap e195: 8 total, 8 up, 8 in 2026-03-10T14:56:16.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:15 vm00 bash[28403]: cluster 2026-03-10T14:56:14.742119+0000 mon.a (mon.0) 1093 : cluster [DBG] osdmap e195: 8 total, 8 up, 8 in 2026-03-10T14:56:17.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:16 vm03 bash[23394]: cluster 2026-03-10T14:56:15.741274+0000 
mon.a (mon.0) 1094 : cluster [DBG] osdmap e196: 8 total, 8 up, 8 in 2026-03-10T14:56:17.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:16 vm03 bash[23394]: cluster 2026-03-10T14:56:15.741274+0000 mon.a (mon.0) 1094 : cluster [DBG] osdmap e196: 8 total, 8 up, 8 in 2026-03-10T14:56:17.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:16 vm00 bash[20726]: cluster 2026-03-10T14:56:15.741274+0000 mon.a (mon.0) 1094 : cluster [DBG] osdmap e196: 8 total, 8 up, 8 in 2026-03-10T14:56:17.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:16 vm00 bash[20726]: cluster 2026-03-10T14:56:15.741274+0000 mon.a (mon.0) 1094 : cluster [DBG] osdmap e196: 8 total, 8 up, 8 in 2026-03-10T14:56:17.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:16 vm00 bash[28403]: cluster 2026-03-10T14:56:15.741274+0000 mon.a (mon.0) 1094 : cluster [DBG] osdmap e196: 8 total, 8 up, 8 in 2026-03-10T14:56:17.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:16 vm00 bash[28403]: cluster 2026-03-10T14:56:15.741274+0000 mon.a (mon.0) 1094 : cluster [DBG] osdmap e196: 8 total, 8 up, 8 in 2026-03-10T14:56:18.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:17 vm03 bash[23394]: cluster 2026-03-10T14:56:16.273968+0000 mgr.y (mgr.24425) 175 : cluster [DBG] pgmap v246: 196 pgs: 196 active+clean; 455 KiB data, 403 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T14:56:18.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:17 vm03 bash[23394]: cluster 2026-03-10T14:56:16.273968+0000 mgr.y (mgr.24425) 175 : cluster [DBG] pgmap v246: 196 pgs: 196 active+clean; 455 KiB data, 403 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T14:56:18.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:17 vm03 bash[23394]: cluster 2026-03-10T14:56:16.764538+0000 mon.a (mon.0) 1095 : cluster [DBG] osdmap e197: 8 total, 8 up, 8 in 2026-03-10T14:56:18.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:17 vm03 bash[23394]: cluster 
2026-03-10T14:56:16.764538+0000 mon.a (mon.0) 1095 : cluster [DBG] osdmap e197: 8 total, 8 up, 8 in 2026-03-10T14:56:18.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:17 vm03 bash[23394]: audit 2026-03-10T14:56:16.765036+0000 mon.b (mon.1) 37 : audit [INF] from='client.? 192.168.123.100:0/2349985415' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:18.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:17 vm03 bash[23394]: audit 2026-03-10T14:56:16.765036+0000 mon.b (mon.1) 37 : audit [INF] from='client.? 192.168.123.100:0/2349985415' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:18.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:17 vm03 bash[23394]: audit 2026-03-10T14:56:16.769099+0000 mon.a (mon.0) 1096 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:18.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:17 vm03 bash[23394]: audit 2026-03-10T14:56:16.769099+0000 mon.a (mon.0) 1096 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:18.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:17 vm00 bash[20726]: cluster 2026-03-10T14:56:16.273968+0000 mgr.y (mgr.24425) 175 : cluster [DBG] pgmap v246: 196 pgs: 196 active+clean; 455 KiB data, 403 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T14:56:18.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:17 vm00 bash[20726]: cluster 2026-03-10T14:56:16.273968+0000 mgr.y (mgr.24425) 175 : cluster [DBG] pgmap v246: 196 pgs: 196 active+clean; 455 KiB data, 403 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T14:56:18.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:17 vm00 bash[20726]: cluster 2026-03-10T14:56:16.764538+0000 mon.a (mon.0) 1095 : cluster [DBG] osdmap e197: 8 total, 8 up, 8 in 2026-03-10T14:56:18.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:17 vm00 bash[20726]: cluster 2026-03-10T14:56:16.764538+0000 mon.a (mon.0) 1095 : cluster [DBG] osdmap e197: 8 total, 8 up, 8 in 2026-03-10T14:56:18.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:17 vm00 bash[20726]: audit 2026-03-10T14:56:16.765036+0000 mon.b (mon.1) 37 : audit [INF] from='client.? 192.168.123.100:0/2349985415' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:18.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:17 vm00 bash[20726]: audit 2026-03-10T14:56:16.765036+0000 mon.b (mon.1) 37 : audit [INF] from='client.? 192.168.123.100:0/2349985415' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:18.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:17 vm00 bash[20726]: audit 2026-03-10T14:56:16.769099+0000 mon.a (mon.0) 1096 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:18.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:17 vm00 bash[20726]: audit 2026-03-10T14:56:16.769099+0000 mon.a (mon.0) 1096 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:18.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:17 vm00 bash[28403]: cluster 2026-03-10T14:56:16.273968+0000 mgr.y (mgr.24425) 175 : cluster [DBG] pgmap v246: 196 pgs: 196 active+clean; 455 KiB data, 403 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T14:56:18.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:17 vm00 bash[28403]: cluster 2026-03-10T14:56:16.273968+0000 mgr.y (mgr.24425) 175 : cluster [DBG] pgmap v246: 196 pgs: 196 active+clean; 455 KiB data, 403 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T14:56:18.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:17 vm00 bash[28403]: cluster 2026-03-10T14:56:16.764538+0000 mon.a (mon.0) 1095 : cluster [DBG] osdmap e197: 8 total, 8 up, 8 in 2026-03-10T14:56:18.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:17 vm00 bash[28403]: cluster 2026-03-10T14:56:16.764538+0000 mon.a (mon.0) 1095 : cluster [DBG] osdmap e197: 8 total, 8 up, 8 in 2026-03-10T14:56:18.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:17 vm00 bash[28403]: audit 2026-03-10T14:56:16.765036+0000 mon.b (mon.1) 37 : audit [INF] from='client.? 192.168.123.100:0/2349985415' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:18.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:17 vm00 bash[28403]: audit 2026-03-10T14:56:16.765036+0000 mon.b (mon.1) 37 : audit [INF] from='client.? 
192.168.123.100:0/2349985415' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:18.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:17 vm00 bash[28403]: audit 2026-03-10T14:56:16.769099+0000 mon.a (mon.0) 1096 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:18.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:17 vm00 bash[28403]: audit 2026-03-10T14:56:16.769099+0000 mon.a (mon.0) 1096 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:18.823 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_snap_rollback_removed PASSED [ 49%] 2026-03-10T14:56:19.125 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 14:56:18 vm03 bash[48459]: debug there is no tcmu-runner data available 2026-03-10T14:56:19.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:18 vm03 bash[23394]: audit 2026-03-10T14:56:17.808341+0000 mon.a (mon.0) 1097 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:56:19.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:18 vm03 bash[23394]: audit 2026-03-10T14:56:17.808341+0000 mon.a (mon.0) 1097 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:56:19.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:18 vm03 bash[23394]: cluster 2026-03-10T14:56:17.812498+0000 mon.a (mon.0) 1098 : cluster [DBG] osdmap e198: 8 total, 8 up, 8 in 2026-03-10T14:56:19.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:18 vm03 bash[23394]: cluster 2026-03-10T14:56:17.812498+0000 mon.a (mon.0) 1098 : cluster [DBG] osdmap e198: 8 total, 8 up, 8 in 2026-03-10T14:56:19.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:18 vm03 bash[23394]: cluster 2026-03-10T14:56:18.274311+0000 mgr.y (mgr.24425) 176 : cluster [DBG] pgmap v249: 196 pgs: 196 active+clean; 455 KiB data, 403 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T14:56:19.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:18 vm03 bash[23394]: cluster 2026-03-10T14:56:18.274311+0000 mgr.y (mgr.24425) 176 : cluster [DBG] pgmap v249: 196 pgs: 196 active+clean; 455 KiB data, 403 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T14:56:19.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:18 vm03 bash[23394]: audit 2026-03-10T14:56:18.676382+0000 mgr.y (mgr.24425) 177 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:56:19.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:18 vm03 bash[23394]: audit 2026-03-10T14:56:18.676382+0000 mgr.y (mgr.24425) 177 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:56:19.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:18 vm03 bash[23394]: cluster 2026-03-10T14:56:18.825187+0000 mon.a (mon.0) 1099 : cluster [DBG] osdmap e199: 8 total, 8 up, 8 in 2026-03-10T14:56:19.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:18 vm03 bash[23394]: cluster 
2026-03-10T14:56:18.825187+0000 mon.a (mon.0) 1099 : cluster [DBG] osdmap e199: 8 total, 8 up, 8 in 2026-03-10T14:56:19.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:18 vm00 bash[20726]: audit 2026-03-10T14:56:17.808341+0000 mon.a (mon.0) 1097 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:56:19.222 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:18 vm00 bash[20726]: audit 2026-03-10T14:56:17.808341+0000 mon.a (mon.0) 1097 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:56:19.229 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:18 vm00 bash[20726]: cluster 2026-03-10T14:56:17.812498+0000 mon.a (mon.0) 1098 : cluster [DBG] osdmap e198: 8 total, 8 up, 8 in 2026-03-10T14:56:19.229 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:18 vm00 bash[20726]: cluster 2026-03-10T14:56:17.812498+0000 mon.a (mon.0) 1098 : cluster [DBG] osdmap e198: 8 total, 8 up, 8 in 2026-03-10T14:56:19.229 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:18 vm00 bash[20726]: cluster 2026-03-10T14:56:18.274311+0000 mgr.y (mgr.24425) 176 : cluster [DBG] pgmap v249: 196 pgs: 196 active+clean; 455 KiB data, 403 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T14:56:19.229 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:18 vm00 bash[20726]: cluster 2026-03-10T14:56:18.274311+0000 mgr.y (mgr.24425) 176 : cluster [DBG] pgmap v249: 196 pgs: 196 active+clean; 455 KiB data, 403 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T14:56:19.229 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:18 vm00 bash[20726]: audit 2026-03-10T14:56:18.676382+0000 mgr.y (mgr.24425) 177 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:56:19.229 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 
14:56:18 vm00 bash[20726]: audit 2026-03-10T14:56:18.676382+0000 mgr.y (mgr.24425) 177 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:56:19.229 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:18 vm00 bash[20726]: cluster 2026-03-10T14:56:18.825187+0000 mon.a (mon.0) 1099 : cluster [DBG] osdmap e199: 8 total, 8 up, 8 in 2026-03-10T14:56:19.229 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:18 vm00 bash[20726]: cluster 2026-03-10T14:56:18.825187+0000 mon.a (mon.0) 1099 : cluster [DBG] osdmap e199: 8 total, 8 up, 8 in 2026-03-10T14:56:19.229 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:18 vm00 bash[28403]: audit 2026-03-10T14:56:17.808341+0000 mon.a (mon.0) 1097 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:56:19.229 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:18 vm00 bash[28403]: audit 2026-03-10T14:56:17.808341+0000 mon.a (mon.0) 1097 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:56:19.229 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:18 vm00 bash[28403]: cluster 2026-03-10T14:56:17.812498+0000 mon.a (mon.0) 1098 : cluster [DBG] osdmap e198: 8 total, 8 up, 8 in 2026-03-10T14:56:19.229 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:18 vm00 bash[28403]: cluster 2026-03-10T14:56:17.812498+0000 mon.a (mon.0) 1098 : cluster [DBG] osdmap e198: 8 total, 8 up, 8 in 2026-03-10T14:56:19.229 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:18 vm00 bash[28403]: cluster 2026-03-10T14:56:18.274311+0000 mgr.y (mgr.24425) 176 : cluster [DBG] pgmap v249: 196 pgs: 196 active+clean; 455 KiB data, 403 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T14:56:19.229 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:18 vm00 bash[28403]: cluster 2026-03-10T14:56:18.274311+0000 mgr.y (mgr.24425) 176 : cluster [DBG] pgmap v249: 196 pgs: 196 active+clean; 455 KiB data, 403 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T14:56:19.229 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:18 vm00 bash[28403]: audit 2026-03-10T14:56:18.676382+0000 mgr.y (mgr.24425) 177 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:56:19.229 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:18 vm00 bash[28403]: audit 2026-03-10T14:56:18.676382+0000 mgr.y (mgr.24425) 177 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:56:19.229 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:18 vm00 bash[28403]: cluster 2026-03-10T14:56:18.825187+0000 mon.a (mon.0) 1099 : cluster [DBG] osdmap e199: 8 total, 8 up, 8 in 2026-03-10T14:56:19.229 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:18 vm00 bash[28403]: cluster 
2026-03-10T14:56:18.825187+0000 mon.a (mon.0) 1099 : cluster [DBG] osdmap e199: 8 total, 8 up, 8 in 2026-03-10T14:56:21.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:21 vm03 bash[23394]: cluster 2026-03-10T14:56:20.077306+0000 mon.a (mon.0) 1100 : cluster [DBG] osdmap e200: 8 total, 8 up, 8 in 2026-03-10T14:56:21.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:21 vm03 bash[23394]: cluster 2026-03-10T14:56:20.077306+0000 mon.a (mon.0) 1100 : cluster [DBG] osdmap e200: 8 total, 8 up, 8 in 2026-03-10T14:56:21.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:21 vm03 bash[23394]: cluster 2026-03-10T14:56:20.274785+0000 mgr.y (mgr.24425) 178 : cluster [DBG] pgmap v252: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 421 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:56:21.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:21 vm03 bash[23394]: cluster 2026-03-10T14:56:20.274785+0000 mgr.y (mgr.24425) 178 : cluster [DBG] pgmap v252: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 421 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:56:21.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:21 vm00 bash[28403]: cluster 2026-03-10T14:56:20.077306+0000 mon.a (mon.0) 1100 : cluster [DBG] osdmap e200: 8 total, 8 up, 8 in 2026-03-10T14:56:21.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:21 vm00 bash[28403]: cluster 2026-03-10T14:56:20.077306+0000 mon.a (mon.0) 1100 : cluster [DBG] osdmap e200: 8 total, 8 up, 8 in 2026-03-10T14:56:21.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:21 vm00 bash[28403]: cluster 2026-03-10T14:56:20.274785+0000 mgr.y (mgr.24425) 178 : cluster [DBG] pgmap v252: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 421 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:56:21.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:21 vm00 bash[28403]: cluster 2026-03-10T14:56:20.274785+0000 mgr.y 
(mgr.24425) 178 : cluster [DBG] pgmap v252: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 421 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:56:21.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:21 vm00 bash[20726]: cluster 2026-03-10T14:56:20.077306+0000 mon.a (mon.0) 1100 : cluster [DBG] osdmap e200: 8 total, 8 up, 8 in 2026-03-10T14:56:21.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:21 vm00 bash[20726]: cluster 2026-03-10T14:56:20.077306+0000 mon.a (mon.0) 1100 : cluster [DBG] osdmap e200: 8 total, 8 up, 8 in 2026-03-10T14:56:21.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:21 vm00 bash[20726]: cluster 2026-03-10T14:56:20.274785+0000 mgr.y (mgr.24425) 178 : cluster [DBG] pgmap v252: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 421 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:56:21.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:21 vm00 bash[20726]: cluster 2026-03-10T14:56:20.274785+0000 mgr.y (mgr.24425) 178 : cluster [DBG] pgmap v252: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 421 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:56:22.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:22 vm03 bash[23394]: cluster 2026-03-10T14:56:21.048089+0000 mon.a (mon.0) 1101 : cluster [DBG] osdmap e201: 8 total, 8 up, 8 in 2026-03-10T14:56:22.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:22 vm03 bash[23394]: cluster 2026-03-10T14:56:21.048089+0000 mon.a (mon.0) 1101 : cluster [DBG] osdmap e201: 8 total, 8 up, 8 in 2026-03-10T14:56:22.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:22 vm00 bash[20726]: cluster 2026-03-10T14:56:21.048089+0000 mon.a (mon.0) 1101 : cluster [DBG] osdmap e201: 8 total, 8 up, 8 in 2026-03-10T14:56:22.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:22 vm00 bash[20726]: cluster 2026-03-10T14:56:21.048089+0000 mon.a (mon.0) 1101 : cluster [DBG] osdmap e201: 8 
total, 8 up, 8 in 2026-03-10T14:56:22.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:22 vm00 bash[28403]: cluster 2026-03-10T14:56:21.048089+0000 mon.a (mon.0) 1101 : cluster [DBG] osdmap e201: 8 total, 8 up, 8 in 2026-03-10T14:56:22.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:22 vm00 bash[28403]: cluster 2026-03-10T14:56:21.048089+0000 mon.a (mon.0) 1101 : cluster [DBG] osdmap e201: 8 total, 8 up, 8 in 2026-03-10T14:56:23.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:23 vm03 bash[23394]: cluster 2026-03-10T14:56:22.275099+0000 mgr.y (mgr.24425) 179 : cluster [DBG] pgmap v254: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 421 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-10T14:56:23.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:23 vm03 bash[23394]: cluster 2026-03-10T14:56:22.275099+0000 mgr.y (mgr.24425) 179 : cluster [DBG] pgmap v254: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 421 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-10T14:56:23.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:23 vm03 bash[23394]: cluster 2026-03-10T14:56:22.304699+0000 mon.a (mon.0) 1102 : cluster [DBG] osdmap e202: 8 total, 8 up, 8 in 2026-03-10T14:56:23.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:23 vm03 bash[23394]: cluster 2026-03-10T14:56:22.304699+0000 mon.a (mon.0) 1102 : cluster [DBG] osdmap e202: 8 total, 8 up, 8 in 2026-03-10T14:56:23.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:23 vm00 bash[20726]: cluster 2026-03-10T14:56:22.275099+0000 mgr.y (mgr.24425) 179 : cluster [DBG] pgmap v254: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 421 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-10T14:56:23.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:23 vm00 bash[20726]: cluster 2026-03-10T14:56:22.275099+0000 mgr.y (mgr.24425) 179 : cluster [DBG] pgmap v254: 196 pgs: 32 creating+peering, 164 active+clean; 
455 KiB data, 421 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-10T14:56:23.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:23 vm00 bash[20726]: cluster 2026-03-10T14:56:22.304699+0000 mon.a (mon.0) 1102 : cluster [DBG] osdmap e202: 8 total, 8 up, 8 in 2026-03-10T14:56:23.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:23 vm00 bash[20726]: cluster 2026-03-10T14:56:22.304699+0000 mon.a (mon.0) 1102 : cluster [DBG] osdmap e202: 8 total, 8 up, 8 in 2026-03-10T14:56:23.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:23 vm00 bash[28403]: cluster 2026-03-10T14:56:22.275099+0000 mgr.y (mgr.24425) 179 : cluster [DBG] pgmap v254: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 421 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-10T14:56:23.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:23 vm00 bash[28403]: cluster 2026-03-10T14:56:22.275099+0000 mgr.y (mgr.24425) 179 : cluster [DBG] pgmap v254: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 421 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-10T14:56:23.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:23 vm00 bash[28403]: cluster 2026-03-10T14:56:22.304699+0000 mon.a (mon.0) 1102 : cluster [DBG] osdmap e202: 8 total, 8 up, 8 in 2026-03-10T14:56:23.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:23 vm00 bash[28403]: cluster 2026-03-10T14:56:22.304699+0000 mon.a (mon.0) 1102 : cluster [DBG] osdmap e202: 8 total, 8 up, 8 in 2026-03-10T14:56:24.214 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:56:23 vm00 bash[21005]: ::ffff:192.168.123.103 - - [10/Mar/2026:14:56:23] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:56:24.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:24 vm00 bash[20726]: cluster 2026-03-10T14:56:23.308530+0000 mon.a (mon.0) 1103 : cluster [DBG] osdmap e203: 8 total, 8 up, 8 in 2026-03-10T14:56:24.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:24 
vm00 bash[20726]: cluster 2026-03-10T14:56:23.308530+0000 mon.a (mon.0) 1103 : cluster [DBG] osdmap e203: 8 total, 8 up, 8 in 2026-03-10T14:56:24.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:24 vm00 bash[20726]: audit 2026-03-10T14:56:23.325514+0000 mon.b (mon.1) 38 : audit [INF] from='client.? 192.168.123.100:0/4108912219' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:24.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:24 vm00 bash[20726]: audit 2026-03-10T14:56:23.325514+0000 mon.b (mon.1) 38 : audit [INF] from='client.? 192.168.123.100:0/4108912219' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:24.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:24 vm00 bash[20726]: audit 2026-03-10T14:56:23.329827+0000 mon.a (mon.0) 1104 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:24.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:24 vm00 bash[20726]: audit 2026-03-10T14:56:23.329827+0000 mon.a (mon.0) 1104 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:24.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:24 vm00 bash[28403]: cluster 2026-03-10T14:56:23.308530+0000 mon.a (mon.0) 1103 : cluster [DBG] osdmap e203: 8 total, 8 up, 8 in 2026-03-10T14:56:24.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:24 vm00 bash[28403]: cluster 2026-03-10T14:56:23.308530+0000 mon.a (mon.0) 1103 : cluster [DBG] osdmap e203: 8 total, 8 up, 8 in 2026-03-10T14:56:24.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:24 vm00 bash[28403]: audit 2026-03-10T14:56:23.325514+0000 mon.b (mon.1) 38 : audit [INF] from='client.? 
192.168.123.100:0/4108912219' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:24.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:24 vm00 bash[28403]: audit 2026-03-10T14:56:23.325514+0000 mon.b (mon.1) 38 : audit [INF] from='client.? 192.168.123.100:0/4108912219' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:24.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:24 vm00 bash[28403]: audit 2026-03-10T14:56:23.329827+0000 mon.a (mon.0) 1104 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:24.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:24 vm00 bash[28403]: audit 2026-03-10T14:56:23.329827+0000 mon.a (mon.0) 1104 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:24.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:24 vm03 bash[23394]: cluster 2026-03-10T14:56:23.308530+0000 mon.a (mon.0) 1103 : cluster [DBG] osdmap e203: 8 total, 8 up, 8 in 2026-03-10T14:56:24.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:24 vm03 bash[23394]: cluster 2026-03-10T14:56:23.308530+0000 mon.a (mon.0) 1103 : cluster [DBG] osdmap e203: 8 total, 8 up, 8 in 2026-03-10T14:56:24.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:24 vm03 bash[23394]: audit 2026-03-10T14:56:23.325514+0000 mon.b (mon.1) 38 : audit [INF] from='client.? 192.168.123.100:0/4108912219' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:24.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:24 vm03 bash[23394]: audit 2026-03-10T14:56:23.325514+0000 mon.b (mon.1) 38 : audit [INF] from='client.? 
192.168.123.100:0/4108912219' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:24.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:24 vm03 bash[23394]: audit 2026-03-10T14:56:23.329827+0000 mon.a (mon.0) 1104 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:24.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:24 vm03 bash[23394]: audit 2026-03-10T14:56:23.329827+0000 mon.a (mon.0) 1104 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:25.633 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_snap_read PASSED [ 50%] 2026-03-10T14:56:26.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:25 vm03 bash[23394]: cluster 2026-03-10T14:56:24.275399+0000 mgr.y (mgr.24425) 180 : cluster [DBG] pgmap v257: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 421 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:56:26.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:25 vm03 bash[23394]: cluster 2026-03-10T14:56:24.275399+0000 mgr.y (mgr.24425) 180 : cluster [DBG] pgmap v257: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 421 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:56:26.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:25 vm03 bash[23394]: audit 2026-03-10T14:56:24.394004+0000 mon.a (mon.0) 1105 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:56:26.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:25 vm03 bash[23394]: audit 2026-03-10T14:56:24.394004+0000 mon.a (mon.0) 1105 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:56:26.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:25 vm03 bash[23394]: cluster 2026-03-10T14:56:24.404465+0000 mon.a (mon.0) 1106 : cluster [DBG] osdmap e204: 8 total, 8 up, 8 in 2026-03-10T14:56:26.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:25 vm03 bash[23394]: cluster 2026-03-10T14:56:24.404465+0000 mon.a (mon.0) 1106 : cluster [DBG] osdmap e204: 8 total, 8 up, 8 in 2026-03-10T14:56:26.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:25 vm03 bash[23394]: audit 2026-03-10T14:56:24.449392+0000 mon.a (mon.0) 1107 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:56:26.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:25 vm03 bash[23394]: audit 2026-03-10T14:56:24.449392+0000 mon.a (mon.0) 1107 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:56:26.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:25 vm00 bash[20726]: cluster 2026-03-10T14:56:24.275399+0000 mgr.y (mgr.24425) 180 : cluster [DBG] pgmap v257: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 421 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:56:26.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:25 vm00 bash[20726]: cluster 2026-03-10T14:56:24.275399+0000 mgr.y (mgr.24425) 180 : cluster [DBG] pgmap v257: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 421 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:56:26.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:25 vm00 bash[20726]: audit 2026-03-10T14:56:24.394004+0000 mon.a (mon.0) 1105 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:56:26.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:25 vm00 bash[20726]: audit 2026-03-10T14:56:24.394004+0000 mon.a (mon.0) 1105 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:56:26.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:25 vm00 bash[20726]: cluster 2026-03-10T14:56:24.404465+0000 mon.a (mon.0) 1106 : cluster [DBG] osdmap e204: 8 total, 8 up, 8 in 2026-03-10T14:56:26.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:25 vm00 bash[20726]: cluster 2026-03-10T14:56:24.404465+0000 mon.a (mon.0) 1106 : cluster [DBG] osdmap e204: 8 total, 8 up, 8 in 2026-03-10T14:56:26.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:25 vm00 bash[20726]: audit 2026-03-10T14:56:24.449392+0000 mon.a (mon.0) 1107 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:56:26.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:25 vm00 bash[20726]: audit 2026-03-10T14:56:24.449392+0000 mon.a (mon.0) 1107 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:56:26.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:25 vm00 bash[28403]: cluster 2026-03-10T14:56:24.275399+0000 mgr.y (mgr.24425) 180 : cluster [DBG] pgmap v257: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 421 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:56:26.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:25 vm00 bash[28403]: cluster 2026-03-10T14:56:24.275399+0000 mgr.y (mgr.24425) 180 : cluster [DBG] pgmap v257: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 421 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:56:26.214 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:25 vm00 bash[28403]: audit 2026-03-10T14:56:24.394004+0000 mon.a (mon.0) 1105 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:56:26.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:25 vm00 bash[28403]: audit 2026-03-10T14:56:24.394004+0000 mon.a (mon.0) 1105 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:56:26.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:25 vm00 bash[28403]: cluster 2026-03-10T14:56:24.404465+0000 mon.a (mon.0) 1106 : cluster [DBG] osdmap e204: 8 total, 8 up, 8 in 2026-03-10T14:56:26.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:25 vm00 bash[28403]: cluster 2026-03-10T14:56:24.404465+0000 mon.a (mon.0) 1106 : cluster [DBG] osdmap e204: 8 total, 8 up, 8 in 2026-03-10T14:56:26.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:25 vm00 bash[28403]: audit 2026-03-10T14:56:24.449392+0000 mon.a (mon.0) 1107 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:56:26.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:25 vm00 bash[28403]: audit 2026-03-10T14:56:24.449392+0000 mon.a (mon.0) 1107 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:56:27.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:26 vm03 bash[23394]: cluster 2026-03-10T14:56:25.593507+0000 mon.a (mon.0) 1108 : cluster [DBG] osdmap e205: 8 total, 8 up, 8 in 2026-03-10T14:56:27.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:26 vm03 bash[23394]: cluster 2026-03-10T14:56:25.593507+0000 mon.a (mon.0) 1108 : cluster [DBG] osdmap e205: 8 total, 8 up, 8 in 2026-03-10T14:56:27.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:26 vm03 
bash[23394]: cluster 2026-03-10T14:56:26.467019+0000 mon.a (mon.0) 1109 : cluster [DBG] osdmap e206: 8 total, 8 up, 8 in 2026-03-10T14:56:27.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:26 vm03 bash[23394]: cluster 2026-03-10T14:56:26.467019+0000 mon.a (mon.0) 1109 : cluster [DBG] osdmap e206: 8 total, 8 up, 8 in 2026-03-10T14:56:27.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:26 vm00 bash[20726]: cluster 2026-03-10T14:56:25.593507+0000 mon.a (mon.0) 1108 : cluster [DBG] osdmap e205: 8 total, 8 up, 8 in 2026-03-10T14:56:27.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:26 vm00 bash[20726]: cluster 2026-03-10T14:56:25.593507+0000 mon.a (mon.0) 1108 : cluster [DBG] osdmap e205: 8 total, 8 up, 8 in 2026-03-10T14:56:27.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:26 vm00 bash[20726]: cluster 2026-03-10T14:56:26.467019+0000 mon.a (mon.0) 1109 : cluster [DBG] osdmap e206: 8 total, 8 up, 8 in 2026-03-10T14:56:27.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:26 vm00 bash[20726]: cluster 2026-03-10T14:56:26.467019+0000 mon.a (mon.0) 1109 : cluster [DBG] osdmap e206: 8 total, 8 up, 8 in 2026-03-10T14:56:27.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:26 vm00 bash[28403]: cluster 2026-03-10T14:56:25.593507+0000 mon.a (mon.0) 1108 : cluster [DBG] osdmap e205: 8 total, 8 up, 8 in 2026-03-10T14:56:27.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:26 vm00 bash[28403]: cluster 2026-03-10T14:56:25.593507+0000 mon.a (mon.0) 1108 : cluster [DBG] osdmap e205: 8 total, 8 up, 8 in 2026-03-10T14:56:27.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:26 vm00 bash[28403]: cluster 2026-03-10T14:56:26.467019+0000 mon.a (mon.0) 1109 : cluster [DBG] osdmap e206: 8 total, 8 up, 8 in 2026-03-10T14:56:27.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:26 vm00 bash[28403]: cluster 2026-03-10T14:56:26.467019+0000 mon.a (mon.0) 1109 : cluster [DBG] osdmap e206: 8 total, 8 up, 8 in 2026-03-10T14:56:28.125 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:27 vm03 bash[23394]: cluster 2026-03-10T14:56:26.275679+0000 mgr.y (mgr.24425) 181 : cluster [DBG] pgmap v260: 164 pgs: 164 active+clean; 455 KiB data, 439 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:56:28.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:27 vm03 bash[23394]: cluster 2026-03-10T14:56:26.275679+0000 mgr.y (mgr.24425) 181 : cluster [DBG] pgmap v260: 164 pgs: 164 active+clean; 455 KiB data, 439 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:56:28.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:27 vm03 bash[23394]: cluster 2026-03-10T14:56:26.727664+0000 mon.a (mon.0) 1110 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:56:28.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:27 vm03 bash[23394]: cluster 2026-03-10T14:56:26.727664+0000 mon.a (mon.0) 1110 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:56:28.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:27 vm03 bash[23394]: cluster 2026-03-10T14:56:27.457166+0000 mon.a (mon.0) 1111 : cluster [DBG] osdmap e207: 8 total, 8 up, 8 in 2026-03-10T14:56:28.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:27 vm03 bash[23394]: cluster 2026-03-10T14:56:27.457166+0000 mon.a (mon.0) 1111 : cluster [DBG] osdmap e207: 8 total, 8 up, 8 in 2026-03-10T14:56:28.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:27 vm03 bash[23394]: audit 2026-03-10T14:56:27.495109+0000 mon.a (mon.0) 1112 : audit [INF] from='client.? 192.168.123.100:0/936642463' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:28.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:27 vm03 bash[23394]: audit 2026-03-10T14:56:27.495109+0000 mon.a (mon.0) 1112 : audit [INF] from='client.? 
192.168.123.100:0/936642463' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:28.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:27 vm00 bash[20726]: cluster 2026-03-10T14:56:26.275679+0000 mgr.y (mgr.24425) 181 : cluster [DBG] pgmap v260: 164 pgs: 164 active+clean; 455 KiB data, 439 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:56:28.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:27 vm00 bash[20726]: cluster 2026-03-10T14:56:26.275679+0000 mgr.y (mgr.24425) 181 : cluster [DBG] pgmap v260: 164 pgs: 164 active+clean; 455 KiB data, 439 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:56:28.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:27 vm00 bash[20726]: cluster 2026-03-10T14:56:26.727664+0000 mon.a (mon.0) 1110 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:56:28.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:27 vm00 bash[20726]: cluster 2026-03-10T14:56:26.727664+0000 mon.a (mon.0) 1110 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:56:28.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:27 vm00 bash[20726]: cluster 2026-03-10T14:56:27.457166+0000 mon.a (mon.0) 1111 : cluster [DBG] osdmap e207: 8 total, 8 up, 8 in 2026-03-10T14:56:28.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:27 vm00 bash[20726]: cluster 2026-03-10T14:56:27.457166+0000 mon.a (mon.0) 1111 : cluster [DBG] osdmap e207: 8 total, 8 up, 8 in 2026-03-10T14:56:28.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:27 vm00 bash[20726]: audit 2026-03-10T14:56:27.495109+0000 mon.a (mon.0) 1112 : audit [INF] from='client.? 
192.168.123.100:0/936642463' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:28.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:27 vm00 bash[20726]: audit 2026-03-10T14:56:27.495109+0000 mon.a (mon.0) 1112 : audit [INF] from='client.? 192.168.123.100:0/936642463' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:28.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:27 vm00 bash[28403]: cluster 2026-03-10T14:56:26.275679+0000 mgr.y (mgr.24425) 181 : cluster [DBG] pgmap v260: 164 pgs: 164 active+clean; 455 KiB data, 439 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:56:28.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:27 vm00 bash[28403]: cluster 2026-03-10T14:56:26.275679+0000 mgr.y (mgr.24425) 181 : cluster [DBG] pgmap v260: 164 pgs: 164 active+clean; 455 KiB data, 439 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:56:28.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:27 vm00 bash[28403]: cluster 2026-03-10T14:56:26.727664+0000 mon.a (mon.0) 1110 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:56:28.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:27 vm00 bash[28403]: cluster 2026-03-10T14:56:26.727664+0000 mon.a (mon.0) 1110 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:56:28.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:27 vm00 bash[28403]: cluster 2026-03-10T14:56:27.457166+0000 mon.a (mon.0) 1111 : cluster [DBG] osdmap e207: 8 total, 8 up, 8 in 2026-03-10T14:56:28.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:27 vm00 bash[28403]: cluster 2026-03-10T14:56:27.457166+0000 mon.a (mon.0) 1111 : cluster [DBG] osdmap e207: 8 total, 8 up, 8 in 2026-03-10T14:56:28.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:27 vm00 bash[28403]: 
audit 2026-03-10T14:56:27.495109+0000 mon.a (mon.0) 1112 : audit [INF] from='client.? 192.168.123.100:0/936642463' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:28.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:27 vm00 bash[28403]: audit 2026-03-10T14:56:27.495109+0000 mon.a (mon.0) 1112 : audit [INF] from='client.? 192.168.123.100:0/936642463' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:29.125 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 14:56:28 vm03 bash[48459]: debug there is no tcmu-runner data available 2026-03-10T14:56:29.510 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_set_omap PASSED [ 51%] 2026-03-10T14:56:29.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:29 vm03 bash[23394]: cluster 2026-03-10T14:56:28.275988+0000 mgr.y (mgr.24425) 182 : cluster [DBG] pgmap v263: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 439 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:56:29.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:29 vm03 bash[23394]: cluster 2026-03-10T14:56:28.275988+0000 mgr.y (mgr.24425) 182 : cluster [DBG] pgmap v263: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 439 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:56:29.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:29 vm03 bash[23394]: audit 2026-03-10T14:56:28.464653+0000 mon.a (mon.0) 1113 : audit [INF] from='client.? 192.168.123.100:0/936642463' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:56:29.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:29 vm03 bash[23394]: audit 2026-03-10T14:56:28.464653+0000 mon.a (mon.0) 1113 : audit [INF] from='client.? 
192.168.123.100:0/936642463' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:56:29.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:29 vm03 bash[23394]: cluster 2026-03-10T14:56:28.470932+0000 mon.a (mon.0) 1114 : cluster [DBG] osdmap e208: 8 total, 8 up, 8 in 2026-03-10T14:56:29.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:29 vm03 bash[23394]: cluster 2026-03-10T14:56:28.470932+0000 mon.a (mon.0) 1114 : cluster [DBG] osdmap e208: 8 total, 8 up, 8 in 2026-03-10T14:56:29.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:29 vm03 bash[23394]: audit 2026-03-10T14:56:28.687219+0000 mgr.y (mgr.24425) 183 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:56:29.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:29 vm03 bash[23394]: audit 2026-03-10T14:56:28.687219+0000 mgr.y (mgr.24425) 183 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:56:29.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:29 vm00 bash[28403]: cluster 2026-03-10T14:56:28.275988+0000 mgr.y (mgr.24425) 182 : cluster [DBG] pgmap v263: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 439 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:56:29.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:29 vm00 bash[28403]: cluster 2026-03-10T14:56:28.275988+0000 mgr.y (mgr.24425) 182 : cluster [DBG] pgmap v263: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 439 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:56:29.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:29 vm00 bash[28403]: audit 2026-03-10T14:56:28.464653+0000 mon.a (mon.0) 1113 : audit [INF] from='client.? 
192.168.123.100:0/936642463' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:56:29.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:29 vm00 bash[28403]: audit 2026-03-10T14:56:28.464653+0000 mon.a (mon.0) 1113 : audit [INF] from='client.? 192.168.123.100:0/936642463' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:56:29.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:29 vm00 bash[28403]: cluster 2026-03-10T14:56:28.470932+0000 mon.a (mon.0) 1114 : cluster [DBG] osdmap e208: 8 total, 8 up, 8 in 2026-03-10T14:56:29.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:29 vm00 bash[28403]: cluster 2026-03-10T14:56:28.470932+0000 mon.a (mon.0) 1114 : cluster [DBG] osdmap e208: 8 total, 8 up, 8 in 2026-03-10T14:56:29.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:29 vm00 bash[28403]: audit 2026-03-10T14:56:28.687219+0000 mgr.y (mgr.24425) 183 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:56:29.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:29 vm00 bash[28403]: audit 2026-03-10T14:56:28.687219+0000 mgr.y (mgr.24425) 183 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:56:29.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:29 vm00 bash[20726]: cluster 2026-03-10T14:56:28.275988+0000 mgr.y (mgr.24425) 182 : cluster [DBG] pgmap v263: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 439 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:56:29.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:29 vm00 bash[20726]: cluster 2026-03-10T14:56:28.275988+0000 mgr.y (mgr.24425) 182 : cluster [DBG] pgmap v263: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 439 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-10T14:56:29.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:29 vm00 bash[20726]: audit 2026-03-10T14:56:28.464653+0000 mon.a (mon.0) 1113 : audit [INF] from='client.? 192.168.123.100:0/936642463' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:56:29.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:29 vm00 bash[20726]: audit 2026-03-10T14:56:28.464653+0000 mon.a (mon.0) 1113 : audit [INF] from='client.? 192.168.123.100:0/936642463' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:56:29.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:29 vm00 bash[20726]: cluster 2026-03-10T14:56:28.470932+0000 mon.a (mon.0) 1114 : cluster [DBG] osdmap e208: 8 total, 8 up, 8 in 2026-03-10T14:56:29.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:29 vm00 bash[20726]: cluster 2026-03-10T14:56:28.470932+0000 mon.a (mon.0) 1114 : cluster [DBG] osdmap e208: 8 total, 8 up, 8 in 2026-03-10T14:56:29.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:29 vm00 bash[20726]: audit 2026-03-10T14:56:28.687219+0000 mgr.y (mgr.24425) 183 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:56:29.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:29 vm00 bash[20726]: audit 2026-03-10T14:56:28.687219+0000 mgr.y (mgr.24425) 183 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:56:30.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:30 vm03 bash[23394]: cluster 2026-03-10T14:56:29.510827+0000 mon.a (mon.0) 1115 : cluster [DBG] osdmap e209: 8 total, 8 up, 8 in 2026-03-10T14:56:30.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:30 vm03 bash[23394]: cluster 2026-03-10T14:56:29.510827+0000 mon.a (mon.0) 1115 : cluster [DBG] osdmap e209: 8 total, 8 up, 8 in 2026-03-10T14:56:30.964 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:30 vm00 bash[28403]: cluster 2026-03-10T14:56:29.510827+0000 mon.a (mon.0) 1115 : cluster [DBG] osdmap e209: 8 total, 8 up, 8 in
2026-03-10T14:56:30.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:30 vm00 bash[20726]: cluster 2026-03-10T14:56:29.510827+0000 mon.a (mon.0) 1115 : cluster [DBG] osdmap e209: 8 total, 8 up, 8 in
2026-03-10T14:56:31.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:31 vm03 bash[23394]: cluster 2026-03-10T14:56:30.276413+0000 mgr.y (mgr.24425) 184 : cluster [DBG] pgmap v266: 164 pgs: 164 active+clean; 455 KiB data, 483 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:56:31.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:31 vm03 bash[23394]: cluster 2026-03-10T14:56:30.555803+0000 mon.a (mon.0) 1116 : cluster [DBG] osdmap e210: 8 total, 8 up, 8 in
2026-03-10T14:56:31.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:31 vm00 bash[20726]: cluster 2026-03-10T14:56:30.276413+0000 mgr.y (mgr.24425) 184 : cluster [DBG] pgmap v266: 164 pgs: 164 active+clean; 455 KiB data, 483 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:56:31.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:31 vm00 bash[20726]: cluster 2026-03-10T14:56:30.555803+0000 mon.a (mon.0) 1116 : cluster [DBG] osdmap e210: 8 total, 8 up, 8 in
2026-03-10T14:56:31.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:31 vm00 bash[28403]: cluster 2026-03-10T14:56:30.276413+0000 mgr.y (mgr.24425) 184 : cluster [DBG] pgmap v266: 164 pgs: 164 active+clean; 455 KiB data, 483 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:56:31.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:31 vm00 bash[28403]: cluster 2026-03-10T14:56:30.555803+0000 mon.a (mon.0) 1116 : cluster [DBG] osdmap e210: 8 total, 8 up, 8 in
2026-03-10T14:56:32.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:32 vm03 bash[23394]: cluster 2026-03-10T14:56:31.552590+0000 mon.a (mon.0) 1117 : cluster [DBG] osdmap e211: 8 total, 8 up, 8 in
2026-03-10T14:56:32.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:32 vm03 bash[23394]: audit 2026-03-10T14:56:31.595124+0000 mon.c (mon.2) 41 : audit [INF] from='client.? 192.168.123.100:0/2348510622' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T14:56:32.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:32 vm03 bash[23394]: audit 2026-03-10T14:56:31.595521+0000 mon.a (mon.0) 1118 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T14:56:32.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:32 vm00 bash[20726]: cluster 2026-03-10T14:56:31.552590+0000 mon.a (mon.0) 1117 : cluster [DBG] osdmap e211: 8 total, 8 up, 8 in
2026-03-10T14:56:32.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:32 vm00 bash[20726]: audit 2026-03-10T14:56:31.595124+0000 mon.c (mon.2) 41 : audit [INF] from='client.? 192.168.123.100:0/2348510622' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T14:56:32.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:32 vm00 bash[20726]: audit 2026-03-10T14:56:31.595521+0000 mon.a (mon.0) 1118 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T14:56:32.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:32 vm00 bash[28403]: cluster 2026-03-10T14:56:31.552590+0000 mon.a (mon.0) 1117 : cluster [DBG] osdmap e211: 8 total, 8 up, 8 in
2026-03-10T14:56:32.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:32 vm00 bash[28403]: audit 2026-03-10T14:56:31.595124+0000 mon.c (mon.2) 41 : audit [INF] from='client.? 192.168.123.100:0/2348510622' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T14:56:32.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:32 vm00 bash[28403]: audit 2026-03-10T14:56:31.595521+0000 mon.a (mon.0) 1118 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T14:56:33.596 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_set_omap_aio PASSED [ 52%]
2026-03-10T14:56:33.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:33 vm03 bash[23394]: cluster 2026-03-10T14:56:32.276697+0000 mgr.y (mgr.24425) 185 : cluster [DBG] pgmap v269: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 483 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:56:33.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:33 vm03 bash[23394]: cluster 2026-03-10T14:56:32.560729+0000 mon.a (mon.0) 1119 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T14:56:33.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:33 vm03 bash[23394]: audit 2026-03-10T14:56:32.570657+0000 mon.a (mon.0) 1120 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T14:56:33.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:33 vm03 bash[23394]: cluster 2026-03-10T14:56:32.586116+0000 mon.a (mon.0) 1121 : cluster [DBG] osdmap e212: 8 total, 8 up, 8 in
2026-03-10T14:56:33.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:33 vm00 bash[28403]: cluster 2026-03-10T14:56:32.276697+0000 mgr.y (mgr.24425) 185 : cluster [DBG] pgmap v269: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 483 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:56:33.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:33 vm00 bash[28403]: cluster 2026-03-10T14:56:32.560729+0000 mon.a (mon.0) 1119 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T14:56:33.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:33 vm00 bash[28403]: audit 2026-03-10T14:56:32.570657+0000 mon.a (mon.0) 1120 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T14:56:33.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:33 vm00 bash[28403]: cluster 2026-03-10T14:56:32.586116+0000 mon.a (mon.0) 1121 : cluster [DBG] osdmap e212: 8 total, 8 up, 8 in
2026-03-10T14:56:33.964 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:56:33 vm00 bash[21005]: ::ffff:192.168.123.103 - - [10/Mar/2026:14:56:33] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T14:56:33.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:33 vm00 bash[20726]: cluster 2026-03-10T14:56:32.276697+0000 mgr.y (mgr.24425) 185 : cluster [DBG] pgmap v269: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 483 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:56:33.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:33 vm00 bash[20726]: cluster 2026-03-10T14:56:32.560729+0000 mon.a (mon.0) 1119 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T14:56:33.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:33 vm00 bash[20726]: audit 2026-03-10T14:56:32.570657+0000 mon.a (mon.0) 1120 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T14:56:33.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:33 vm00 bash[20726]: cluster 2026-03-10T14:56:32.586116+0000 mon.a (mon.0) 1121 : cluster [DBG] osdmap e212: 8 total, 8 up, 8 in
2026-03-10T14:56:34.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:34 vm00 bash[28403]: cluster 2026-03-10T14:56:33.597901+0000 mon.a (mon.0) 1122 : cluster [DBG] osdmap e213: 8 total, 8 up, 8 in
2026-03-10T14:56:34.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:34 vm00 bash[20726]: cluster 2026-03-10T14:56:33.597901+0000 mon.a (mon.0) 1122 : cluster [DBG] osdmap e213: 8 total, 8 up, 8 in
2026-03-10T14:56:35.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:34 vm03 bash[23394]: cluster 2026-03-10T14:56:33.597901+0000 mon.a (mon.0) 1122 : cluster [DBG] osdmap e213: 8 total, 8 up, 8 in
2026-03-10T14:56:36.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:35 vm03 bash[23394]: cluster 2026-03-10T14:56:34.276956+0000 mgr.y (mgr.24425) 186 : cluster [DBG] pgmap v272: 164 pgs: 164 active+clean; 455 KiB data, 483 MiB used, 159 GiB / 160 GiB avail
2026-03-10T14:56:36.133 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:35 vm03 bash[23394]: cluster 2026-03-10T14:56:34.699927+0000 mon.a (mon.0) 1123 : cluster [DBG] osdmap e214: 8 total, 8 up, 8 in
2026-03-10T14:56:36.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:35 vm00 bash[20726]: cluster 2026-03-10T14:56:34.276956+0000 mgr.y (mgr.24425) 186 : cluster [DBG] pgmap v272: 164 pgs: 164 active+clean; 455 KiB data, 483 MiB used, 159 GiB / 160 GiB avail
2026-03-10T14:56:36.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:35 vm00 bash[20726]: cluster 2026-03-10T14:56:34.699927+0000 mon.a (mon.0) 1123 : cluster [DBG] osdmap e214: 8 total, 8 up, 8 in
2026-03-10T14:56:36.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:35 vm00 bash[28403]: cluster 2026-03-10T14:56:34.276956+0000 mgr.y (mgr.24425) 186 : cluster [DBG] pgmap v272: 164 pgs: 164 active+clean; 455 KiB data, 483 MiB used, 159 GiB / 160 GiB avail
2026-03-10T14:56:36.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:35 vm00 bash[28403]: cluster 2026-03-10T14:56:34.699927+0000 mon.a (mon.0) 1123 : cluster [DBG] osdmap e214: 8 total, 8 up, 8 in
2026-03-10T14:56:37.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:36 vm03 bash[23394]: cluster 2026-03-10T14:56:35.751602+0000 mon.a (mon.0) 1124 : cluster [DBG] osdmap e215: 8 total, 8 up, 8 in
2026-03-10T14:56:37.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:36 vm03 bash[23394]: audit 2026-03-10T14:56:36.025540+0000 mon.c (mon.2) 42 : audit [INF] from='client.? 192.168.123.100:0/3621528241' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T14:56:37.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:36 vm03 bash[23394]: audit 2026-03-10T14:56:36.026145+0000 mon.a (mon.0) 1125 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T14:56:37.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:36 vm00 bash[20726]: cluster 2026-03-10T14:56:35.751602+0000 mon.a (mon.0) 1124 : cluster [DBG] osdmap e215: 8 total, 8 up, 8 in
2026-03-10T14:56:37.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:36 vm00 bash[20726]: audit 2026-03-10T14:56:36.025540+0000 mon.c (mon.2) 42 : audit [INF] from='client.? 192.168.123.100:0/3621528241' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T14:56:37.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:36 vm00 bash[20726]: audit 2026-03-10T14:56:36.026145+0000 mon.a (mon.0) 1125 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T14:56:37.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:36 vm00 bash[28403]: cluster 2026-03-10T14:56:35.751602+0000 mon.a (mon.0) 1124 : cluster [DBG] osdmap e215: 8 total, 8 up, 8 in
2026-03-10T14:56:37.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:36 vm00 bash[28403]: audit 2026-03-10T14:56:36.025540+0000 mon.c (mon.2) 42 : audit [INF] from='client.? 192.168.123.100:0/3621528241' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T14:56:37.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:36 vm00 bash[28403]: audit 2026-03-10T14:56:36.026145+0000 mon.a (mon.0) 1125 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T14:56:37.793 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_write_ops PASSED [ 53%]
2026-03-10T14:56:37.805 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:37 vm00 bash[20726]: cluster 2026-03-10T14:56:36.277371+0000 mgr.y (mgr.24425) 187 : cluster [DBG] pgmap v275: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 483 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:56:37.805 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:37 vm00 bash[28403]: cluster 2026-03-10T14:56:36.277371+0000 mgr.y (mgr.24425) 187 : cluster [DBG] pgmap v275: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 483 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:56:37.805 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:37 vm00 bash[28403]: audit 2026-03-10T14:56:36.777813+0000 mon.a (mon.0) 1126 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T14:56:37.805 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:37 vm00 bash[28403]: cluster 2026-03-10T14:56:36.797361+0000 mon.a (mon.0) 1127 : cluster [DBG] osdmap e216: 8 total, 8 up, 8 in
2026-03-10T14:56:38.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:37 vm03 bash[23394]: cluster 2026-03-10T14:56:36.277371+0000 mgr.y (mgr.24425) 187 : cluster [DBG] pgmap v275: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 483 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:56:38.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:37 vm03 bash[23394]: audit 2026-03-10T14:56:36.777813+0000 mon.a (mon.0) 1126 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T14:56:38.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:37 vm03 bash[23394]: cluster 2026-03-10T14:56:36.797361+0000 mon.a (mon.0) 1127 : cluster [DBG] osdmap e216: 8 total, 8 up, 8 in
2026-03-10T14:56:38.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:37 vm00 bash[20726]: audit 2026-03-10T14:56:36.777813+0000 mon.a (mon.0) 1126 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T14:56:38.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:37 vm00 bash[20726]: cluster 2026-03-10T14:56:36.797361+0000 mon.a (mon.0) 1127 : cluster [DBG] osdmap e216: 8 total, 8 up, 8 in
2026-03-10T14:56:38.950 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 14:56:38 vm03 bash[48459]: debug there is no tcmu-runner data available
2026-03-10T14:56:39.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:38 vm00 bash[28403]: cluster 2026-03-10T14:56:37.791400+0000 mon.a (mon.0) 1128 : cluster [DBG] osdmap e217: 8 total, 8 up, 8 in
2026-03-10T14:56:39.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:38 vm00 bash[28403]: cluster 2026-03-10T14:56:38.277676+0000 mgr.y (mgr.24425) 188 : cluster [DBG] pgmap v278: 164 pgs: 164 active+clean; 455 KiB data, 483 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:56:39.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:38 vm00 bash[28403]: audit 2026-03-10T14:56:38.691255+0000 mgr.y (mgr.24425) 189 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:56:39.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:38 vm00 bash[20726]: cluster 2026-03-10T14:56:37.791400+0000 mon.a (mon.0) 1128 : cluster [DBG] osdmap e217: 8 total, 8 up, 8 in
2026-03-10T14:56:39.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:38 vm00 bash[20726]: cluster 2026-03-10T14:56:38.277676+0000 mgr.y (mgr.24425) 188 : cluster [DBG] pgmap v278: 164 pgs: 164 active+clean; 455 KiB data, 483 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:56:39.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:38 vm00 bash[20726]: audit 2026-03-10T14:56:38.691255+0000 mgr.y (mgr.24425) 189 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:56:39.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:38 vm03 bash[23394]: cluster 2026-03-10T14:56:37.791400+0000 mon.a (mon.0) 1128 : cluster [DBG] osdmap e217: 8 total, 8 up, 8 in
2026-03-10T14:56:39.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:38 vm03 bash[23394]: cluster 2026-03-10T14:56:38.277676+0000 mgr.y (mgr.24425) 188 : cluster [DBG] pgmap v278: 164 pgs: 164 active+clean; 455 KiB data, 483 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:56:39.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:38 vm03 bash[23394]: audit 2026-03-10T14:56:38.691255+0000 mgr.y (mgr.24425) 189 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:56:40.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:39 vm00 bash[28403]: cluster 2026-03-10T14:56:38.838889+0000 mon.a (mon.0) 1129 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T14:56:40.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:39 vm00 bash[28403]: cluster 2026-03-10T14:56:39.034171+0000 mon.a (mon.0) 1130 : cluster [DBG] osdmap e218: 8 total, 8 up, 8 in
2026-03-10T14:56:40.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:39 vm00 bash[28403]: audit 2026-03-10T14:56:39.455149+0000 mon.a (mon.0) 1131 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:56:40.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:39 vm00 bash[20726]: cluster 2026-03-10T14:56:38.838889+0000 mon.a (mon.0) 1129 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T14:56:40.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:39 vm00 bash[20726]: cluster 2026-03-10T14:56:39.034171+0000 mon.a (mon.0) 1130 : cluster [DBG] osdmap e218: 8 total, 8 up, 8 in
2026-03-10T14:56:40.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:39 vm00 bash[20726]: audit 2026-03-10T14:56:39.455149+0000 mon.a (mon.0) 1131 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:56:40.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:39 vm03 bash[23394]: cluster 2026-03-10T14:56:38.838889+0000 mon.a (mon.0) 1129 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T14:56:40.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:39 vm03 bash[23394]: cluster 2026-03-10T14:56:39.034171+0000 mon.a (mon.0) 1130 : cluster [DBG] osdmap e218: 8 total, 8 up, 8 in
2026-03-10T14:56:40.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:39 vm03 bash[23394]: audit 2026-03-10T14:56:39.455149+0000 mon.a (mon.0) 1131 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:56:41.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:41 vm00 bash[28403]: cluster 2026-03-10T14:56:40.022899+0000 mon.a (mon.0) 1132 : cluster [DBG] osdmap e219: 8 total, 8 up, 8 in
2026-03-10T14:56:41.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:41 vm00 bash[28403]: audit 2026-03-10T14:56:40.057375+0000 mon.c (mon.2) 43 : audit [INF] from='client.? 192.168.123.100:0/972767421' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T14:56:41.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:41 vm00 bash[28403]: audit 2026-03-10T14:56:40.057375+0000 mon.c (mon.2) 43 : audit [INF] from='client.? 
192.168.123.100:0/972767421' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:41.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:41 vm00 bash[28403]: audit 2026-03-10T14:56:40.057657+0000 mon.a (mon.0) 1133 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:41.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:41 vm00 bash[28403]: audit 2026-03-10T14:56:40.057657+0000 mon.a (mon.0) 1133 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:41.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:41 vm00 bash[28403]: cluster 2026-03-10T14:56:40.278138+0000 mgr.y (mgr.24425) 190 : cluster [DBG] pgmap v281: 196 pgs: 196 active+clean; 455 KiB data, 488 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T14:56:41.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:41 vm00 bash[28403]: cluster 2026-03-10T14:56:40.278138+0000 mgr.y (mgr.24425) 190 : cluster [DBG] pgmap v281: 196 pgs: 196 active+clean; 455 KiB data, 488 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T14:56:41.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:41 vm00 bash[20726]: cluster 2026-03-10T14:56:40.022899+0000 mon.a (mon.0) 1132 : cluster [DBG] osdmap e219: 8 total, 8 up, 8 in 2026-03-10T14:56:41.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:41 vm00 bash[20726]: cluster 2026-03-10T14:56:40.022899+0000 mon.a (mon.0) 1132 : cluster [DBG] osdmap e219: 8 total, 8 up, 8 in 2026-03-10T14:56:41.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:41 vm00 bash[20726]: audit 2026-03-10T14:56:40.057375+0000 mon.c (mon.2) 43 : audit [INF] from='client.? 
192.168.123.100:0/972767421' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:41.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:41 vm00 bash[20726]: audit 2026-03-10T14:56:40.057375+0000 mon.c (mon.2) 43 : audit [INF] from='client.? 192.168.123.100:0/972767421' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:41.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:41 vm00 bash[20726]: audit 2026-03-10T14:56:40.057657+0000 mon.a (mon.0) 1133 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:41.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:41 vm00 bash[20726]: audit 2026-03-10T14:56:40.057657+0000 mon.a (mon.0) 1133 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:41.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:41 vm00 bash[20726]: cluster 2026-03-10T14:56:40.278138+0000 mgr.y (mgr.24425) 190 : cluster [DBG] pgmap v281: 196 pgs: 196 active+clean; 455 KiB data, 488 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T14:56:41.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:41 vm00 bash[20726]: cluster 2026-03-10T14:56:40.278138+0000 mgr.y (mgr.24425) 190 : cluster [DBG] pgmap v281: 196 pgs: 196 active+clean; 455 KiB data, 488 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T14:56:41.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:41 vm03 bash[23394]: cluster 2026-03-10T14:56:40.022899+0000 mon.a (mon.0) 1132 : cluster [DBG] osdmap e219: 8 total, 8 up, 8 in 2026-03-10T14:56:41.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:41 vm03 bash[23394]: cluster 2026-03-10T14:56:40.022899+0000 mon.a (mon.0) 1132 : cluster [DBG] osdmap e219: 8 total, 8 up, 8 in 2026-03-10T14:56:41.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 
14:56:41 vm03 bash[23394]: audit 2026-03-10T14:56:40.057375+0000 mon.c (mon.2) 43 : audit [INF] from='client.? 192.168.123.100:0/972767421' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:41.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:41 vm03 bash[23394]: audit 2026-03-10T14:56:40.057375+0000 mon.c (mon.2) 43 : audit [INF] from='client.? 192.168.123.100:0/972767421' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:41.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:41 vm03 bash[23394]: audit 2026-03-10T14:56:40.057657+0000 mon.a (mon.0) 1133 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:41.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:41 vm03 bash[23394]: audit 2026-03-10T14:56:40.057657+0000 mon.a (mon.0) 1133 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:41.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:41 vm03 bash[23394]: cluster 2026-03-10T14:56:40.278138+0000 mgr.y (mgr.24425) 190 : cluster [DBG] pgmap v281: 196 pgs: 196 active+clean; 455 KiB data, 488 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T14:56:41.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:41 vm03 bash[23394]: cluster 2026-03-10T14:56:40.278138+0000 mgr.y (mgr.24425) 190 : cluster [DBG] pgmap v281: 196 pgs: 196 active+clean; 455 KiB data, 488 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T14:56:42.554 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_execute_op PASSED [ 54%] 2026-03-10T14:56:42.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:42 vm03 bash[23394]: audit 2026-03-10T14:56:41.109453+0000 mon.a (mon.0) 1134 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:56:42.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:42 vm03 bash[23394]: audit 2026-03-10T14:56:41.109453+0000 mon.a (mon.0) 1134 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:56:42.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:42 vm03 bash[23394]: cluster 2026-03-10T14:56:41.114376+0000 mon.a (mon.0) 1135 : cluster [DBG] osdmap e220: 8 total, 8 up, 8 in 2026-03-10T14:56:42.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:42 vm03 bash[23394]: cluster 2026-03-10T14:56:41.114376+0000 mon.a (mon.0) 1135 : cluster [DBG] osdmap e220: 8 total, 8 up, 8 in 2026-03-10T14:56:42.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:42 vm00 bash[28403]: audit 2026-03-10T14:56:41.109453+0000 mon.a (mon.0) 1134 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:56:42.969 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:42 vm00 bash[28403]: audit 2026-03-10T14:56:41.109453+0000 mon.a (mon.0) 1134 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:56:42.969 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:42 vm00 bash[28403]: cluster 2026-03-10T14:56:41.114376+0000 mon.a (mon.0) 1135 : cluster [DBG] osdmap e220: 8 total, 8 up, 8 in 2026-03-10T14:56:42.969 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:42 vm00 bash[28403]: cluster 2026-03-10T14:56:41.114376+0000 mon.a (mon.0) 1135 : cluster [DBG] osdmap e220: 8 total, 8 up, 8 in 2026-03-10T14:56:42.970 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:42 vm00 bash[20726]: audit 2026-03-10T14:56:41.109453+0000 mon.a (mon.0) 1134 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:56:42.970 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:42 vm00 bash[20726]: audit 2026-03-10T14:56:41.109453+0000 mon.a (mon.0) 1134 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:56:42.970 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:42 vm00 bash[20726]: cluster 2026-03-10T14:56:41.114376+0000 mon.a (mon.0) 1135 : cluster [DBG] osdmap e220: 8 total, 8 up, 8 in 2026-03-10T14:56:42.970 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:42 vm00 bash[20726]: cluster 2026-03-10T14:56:41.114376+0000 mon.a (mon.0) 1135 : cluster [DBG] osdmap e220: 8 total, 8 up, 8 in 2026-03-10T14:56:43.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:43 vm03 bash[23394]: cluster 2026-03-10T14:56:42.278449+0000 mgr.y (mgr.24425) 191 : cluster [DBG] pgmap v283: 196 pgs: 196 active+clean; 455 KiB data, 488 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 227 B/s wr, 2 op/s 2026-03-10T14:56:43.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:43 vm03 bash[23394]: cluster 2026-03-10T14:56:42.278449+0000 mgr.y (mgr.24425) 191 : cluster [DBG] pgmap v283: 196 pgs: 196 active+clean; 455 KiB data, 488 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 227 B/s wr, 2 op/s 2026-03-10T14:56:43.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:43 vm03 bash[23394]: cluster 2026-03-10T14:56:42.547345+0000 mon.a (mon.0) 1136 : cluster [DBG] osdmap e221: 8 total, 8 up, 8 in 2026-03-10T14:56:43.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:43 vm03 bash[23394]: cluster 2026-03-10T14:56:42.547345+0000 mon.a (mon.0) 1136 : cluster [DBG] osdmap e221: 8 total, 8 up, 8 in 2026-03-10T14:56:43.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:43 vm00 bash[28403]: cluster 2026-03-10T14:56:42.278449+0000 mgr.y (mgr.24425) 191 : cluster [DBG] pgmap v283: 196 pgs: 196 active+clean; 455 KiB data, 488 
MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 227 B/s wr, 2 op/s 2026-03-10T14:56:43.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:43 vm00 bash[28403]: cluster 2026-03-10T14:56:42.278449+0000 mgr.y (mgr.24425) 191 : cluster [DBG] pgmap v283: 196 pgs: 196 active+clean; 455 KiB data, 488 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 227 B/s wr, 2 op/s 2026-03-10T14:56:43.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:43 vm00 bash[28403]: cluster 2026-03-10T14:56:42.547345+0000 mon.a (mon.0) 1136 : cluster [DBG] osdmap e221: 8 total, 8 up, 8 in 2026-03-10T14:56:43.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:43 vm00 bash[28403]: cluster 2026-03-10T14:56:42.547345+0000 mon.a (mon.0) 1136 : cluster [DBG] osdmap e221: 8 total, 8 up, 8 in 2026-03-10T14:56:43.964 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:56:43 vm00 bash[21005]: ::ffff:192.168.123.103 - - [10/Mar/2026:14:56:43] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:56:43.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:43 vm00 bash[20726]: cluster 2026-03-10T14:56:42.278449+0000 mgr.y (mgr.24425) 191 : cluster [DBG] pgmap v283: 196 pgs: 196 active+clean; 455 KiB data, 488 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 227 B/s wr, 2 op/s 2026-03-10T14:56:43.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:43 vm00 bash[20726]: cluster 2026-03-10T14:56:42.278449+0000 mgr.y (mgr.24425) 191 : cluster [DBG] pgmap v283: 196 pgs: 196 active+clean; 455 KiB data, 488 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 227 B/s wr, 2 op/s 2026-03-10T14:56:43.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:43 vm00 bash[20726]: cluster 2026-03-10T14:56:42.547345+0000 mon.a (mon.0) 1136 : cluster [DBG] osdmap e221: 8 total, 8 up, 8 in 2026-03-10T14:56:43.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:43 vm00 bash[20726]: cluster 2026-03-10T14:56:42.547345+0000 mon.a (mon.0) 1136 : cluster [DBG] osdmap e221: 8 total, 8 up, 8 in 
2026-03-10T14:56:44.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:44 vm03 bash[23394]: cluster 2026-03-10T14:56:43.579013+0000 mon.a (mon.0) 1137 : cluster [DBG] osdmap e222: 8 total, 8 up, 8 in 2026-03-10T14:56:44.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:44 vm03 bash[23394]: cluster 2026-03-10T14:56:43.579013+0000 mon.a (mon.0) 1137 : cluster [DBG] osdmap e222: 8 total, 8 up, 8 in 2026-03-10T14:56:44.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:44 vm03 bash[23394]: cluster 2026-03-10T14:56:44.590847+0000 mon.a (mon.0) 1138 : cluster [DBG] osdmap e223: 8 total, 8 up, 8 in 2026-03-10T14:56:44.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:44 vm03 bash[23394]: cluster 2026-03-10T14:56:44.590847+0000 mon.a (mon.0) 1138 : cluster [DBG] osdmap e223: 8 total, 8 up, 8 in 2026-03-10T14:56:44.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:44 vm00 bash[28403]: cluster 2026-03-10T14:56:43.579013+0000 mon.a (mon.0) 1137 : cluster [DBG] osdmap e222: 8 total, 8 up, 8 in 2026-03-10T14:56:44.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:44 vm00 bash[28403]: cluster 2026-03-10T14:56:43.579013+0000 mon.a (mon.0) 1137 : cluster [DBG] osdmap e222: 8 total, 8 up, 8 in 2026-03-10T14:56:44.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:44 vm00 bash[28403]: cluster 2026-03-10T14:56:44.590847+0000 mon.a (mon.0) 1138 : cluster [DBG] osdmap e223: 8 total, 8 up, 8 in 2026-03-10T14:56:44.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:44 vm00 bash[28403]: cluster 2026-03-10T14:56:44.590847+0000 mon.a (mon.0) 1138 : cluster [DBG] osdmap e223: 8 total, 8 up, 8 in 2026-03-10T14:56:44.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:44 vm00 bash[20726]: cluster 2026-03-10T14:56:43.579013+0000 mon.a (mon.0) 1137 : cluster [DBG] osdmap e222: 8 total, 8 up, 8 in 2026-03-10T14:56:44.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:44 vm00 bash[20726]: cluster 2026-03-10T14:56:43.579013+0000 mon.a (mon.0) 
1137 : cluster [DBG] osdmap e222: 8 total, 8 up, 8 in 2026-03-10T14:56:44.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:44 vm00 bash[20726]: cluster 2026-03-10T14:56:44.590847+0000 mon.a (mon.0) 1138 : cluster [DBG] osdmap e223: 8 total, 8 up, 8 in 2026-03-10T14:56:44.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:44 vm00 bash[20726]: cluster 2026-03-10T14:56:44.590847+0000 mon.a (mon.0) 1138 : cluster [DBG] osdmap e223: 8 total, 8 up, 8 in 2026-03-10T14:56:45.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:45 vm00 bash[28403]: cluster 2026-03-10T14:56:44.278807+0000 mgr.y (mgr.24425) 192 : cluster [DBG] pgmap v286: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 488 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:56:45.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:45 vm00 bash[28403]: cluster 2026-03-10T14:56:44.278807+0000 mgr.y (mgr.24425) 192 : cluster [DBG] pgmap v286: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 488 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:56:45.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:45 vm00 bash[28403]: audit 2026-03-10T14:56:44.630978+0000 mon.b (mon.1) 39 : audit [INF] from='client.? 192.168.123.100:0/22867501' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:45.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:45 vm00 bash[28403]: audit 2026-03-10T14:56:44.630978+0000 mon.b (mon.1) 39 : audit [INF] from='client.? 192.168.123.100:0/22867501' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:45.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:45 vm00 bash[28403]: audit 2026-03-10T14:56:44.635939+0000 mon.a (mon.0) 1139 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:45.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:45 vm00 bash[28403]: audit 2026-03-10T14:56:44.635939+0000 mon.a (mon.0) 1139 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:45.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:45 vm00 bash[28403]: audit 2026-03-10T14:56:45.582407+0000 mon.a (mon.0) 1140 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:56:45.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:45 vm00 bash[28403]: audit 2026-03-10T14:56:45.582407+0000 mon.a (mon.0) 1140 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:56:45.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:45 vm00 bash[28403]: cluster 2026-03-10T14:56:45.585676+0000 mon.a (mon.0) 1141 : cluster [DBG] osdmap e224: 8 total, 8 up, 8 in 2026-03-10T14:56:45.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:45 vm00 bash[28403]: cluster 2026-03-10T14:56:45.585676+0000 mon.a (mon.0) 1141 : cluster [DBG] osdmap e224: 8 total, 8 up, 8 in 2026-03-10T14:56:45.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:45 vm00 bash[20726]: cluster 2026-03-10T14:56:44.278807+0000 mgr.y (mgr.24425) 192 : cluster [DBG] pgmap v286: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 488 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:56:45.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:45 vm00 bash[20726]: cluster 2026-03-10T14:56:44.278807+0000 mgr.y (mgr.24425) 192 : cluster [DBG] pgmap v286: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 488 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:56:45.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:45 vm00 bash[20726]: audit 
2026-03-10T14:56:44.630978+0000 mon.b (mon.1) 39 : audit [INF] from='client.? 192.168.123.100:0/22867501' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:45.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:45 vm00 bash[20726]: audit 2026-03-10T14:56:44.630978+0000 mon.b (mon.1) 39 : audit [INF] from='client.? 192.168.123.100:0/22867501' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:45.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:45 vm00 bash[20726]: audit 2026-03-10T14:56:44.635939+0000 mon.a (mon.0) 1139 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:45.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:45 vm00 bash[20726]: audit 2026-03-10T14:56:44.635939+0000 mon.a (mon.0) 1139 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:45.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:45 vm00 bash[20726]: audit 2026-03-10T14:56:45.582407+0000 mon.a (mon.0) 1140 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:56:45.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:45 vm00 bash[20726]: audit 2026-03-10T14:56:45.582407+0000 mon.a (mon.0) 1140 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:56:45.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:45 vm00 bash[20726]: cluster 2026-03-10T14:56:45.585676+0000 mon.a (mon.0) 1141 : cluster [DBG] osdmap e224: 8 total, 8 up, 8 in 2026-03-10T14:56:45.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:45 vm00 bash[20726]: cluster 2026-03-10T14:56:45.585676+0000 mon.a (mon.0) 1141 : cluster [DBG] osdmap e224: 8 total, 8 up, 8 in 2026-03-10T14:56:46.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:45 vm03 bash[23394]: cluster 2026-03-10T14:56:44.278807+0000 mgr.y (mgr.24425) 192 : cluster [DBG] pgmap v286: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 488 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:56:46.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:45 vm03 bash[23394]: cluster 2026-03-10T14:56:44.278807+0000 mgr.y (mgr.24425) 192 : cluster [DBG] pgmap v286: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 488 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:56:46.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:45 vm03 bash[23394]: audit 2026-03-10T14:56:44.630978+0000 mon.b (mon.1) 39 : audit [INF] from='client.? 192.168.123.100:0/22867501' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:46.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:45 vm03 bash[23394]: audit 2026-03-10T14:56:44.630978+0000 mon.b (mon.1) 39 : audit [INF] from='client.? 192.168.123.100:0/22867501' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:46.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:45 vm03 bash[23394]: audit 2026-03-10T14:56:44.635939+0000 mon.a (mon.0) 1139 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:46.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:45 vm03 bash[23394]: audit 2026-03-10T14:56:44.635939+0000 mon.a (mon.0) 1139 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:46.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:45 vm03 bash[23394]: audit 2026-03-10T14:56:45.582407+0000 mon.a (mon.0) 1140 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:56:46.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:45 vm03 bash[23394]: audit 2026-03-10T14:56:45.582407+0000 mon.a (mon.0) 1140 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:56:46.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:45 vm03 bash[23394]: cluster 2026-03-10T14:56:45.585676+0000 mon.a (mon.0) 1141 : cluster [DBG] osdmap e224: 8 total, 8 up, 8 in 2026-03-10T14:56:46.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:45 vm03 bash[23394]: cluster 2026-03-10T14:56:45.585676+0000 mon.a (mon.0) 1141 : cluster [DBG] osdmap e224: 8 total, 8 up, 8 in 2026-03-10T14:56:46.771 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_writesame_op PASSED [ 56%] 2026-03-10T14:56:47.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:47 vm00 bash[28403]: cluster 2026-03-10T14:56:46.279082+0000 mgr.y (mgr.24425) 193 : cluster [DBG] pgmap v289: 196 pgs: 196 active+clean; 455 KiB data, 496 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-10T14:56:47.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:47 vm00 bash[28403]: cluster 2026-03-10T14:56:46.279082+0000 mgr.y (mgr.24425) 193 : cluster [DBG] pgmap v289: 196 pgs: 196 active+clean; 455 KiB data, 496 MiB used, 159 GiB / 160 GiB avail; 
1.2 KiB/s rd, 0 op/s 2026-03-10T14:56:47.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:47 vm00 bash[28403]: cluster 2026-03-10T14:56:46.769178+0000 mon.a (mon.0) 1142 : cluster [DBG] osdmap e225: 8 total, 8 up, 8 in 2026-03-10T14:56:47.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:47 vm00 bash[28403]: cluster 2026-03-10T14:56:46.769178+0000 mon.a (mon.0) 1142 : cluster [DBG] osdmap e225: 8 total, 8 up, 8 in 2026-03-10T14:56:47.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:47 vm00 bash[20726]: cluster 2026-03-10T14:56:46.279082+0000 mgr.y (mgr.24425) 193 : cluster [DBG] pgmap v289: 196 pgs: 196 active+clean; 455 KiB data, 496 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-10T14:56:47.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:47 vm00 bash[20726]: cluster 2026-03-10T14:56:46.279082+0000 mgr.y (mgr.24425) 193 : cluster [DBG] pgmap v289: 196 pgs: 196 active+clean; 455 KiB data, 496 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-10T14:56:47.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:47 vm00 bash[20726]: cluster 2026-03-10T14:56:46.769178+0000 mon.a (mon.0) 1142 : cluster [DBG] osdmap e225: 8 total, 8 up, 8 in 2026-03-10T14:56:47.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:47 vm00 bash[20726]: cluster 2026-03-10T14:56:46.769178+0000 mon.a (mon.0) 1142 : cluster [DBG] osdmap e225: 8 total, 8 up, 8 in 2026-03-10T14:56:48.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:47 vm03 bash[23394]: cluster 2026-03-10T14:56:46.279082+0000 mgr.y (mgr.24425) 193 : cluster [DBG] pgmap v289: 196 pgs: 196 active+clean; 455 KiB data, 496 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-10T14:56:48.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:47 vm03 bash[23394]: cluster 2026-03-10T14:56:46.279082+0000 mgr.y (mgr.24425) 193 : cluster [DBG] pgmap v289: 196 pgs: 196 active+clean; 455 KiB data, 496 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 
2026-03-10T14:56:48.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:47 vm03 bash[23394]: cluster 2026-03-10T14:56:46.769178+0000 mon.a (mon.0) 1142 : cluster [DBG] osdmap e225: 8 total, 8 up, 8 in
2026-03-10T14:56:48.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:47 vm03 bash[23394]: cluster 2026-03-10T14:56:46.769178+0000 mon.a (mon.0) 1142 : cluster [DBG] osdmap e225: 8 total, 8 up, 8 in
2026-03-10T14:56:49.125 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 14:56:48 vm03 bash[48459]: debug there is no tcmu-runner data available
2026-03-10T14:56:49.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:48 vm03 bash[23394]: cluster 2026-03-10T14:56:47.810658+0000 mon.a (mon.0) 1143 : cluster [DBG] osdmap e226: 8 total, 8 up, 8 in
2026-03-10T14:56:49.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:48 vm03 bash[23394]: cluster 2026-03-10T14:56:47.810658+0000 mon.a (mon.0) 1143 : cluster [DBG] osdmap e226: 8 total, 8 up, 8 in
2026-03-10T14:56:49.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:48 vm03 bash[23394]: cluster 2026-03-10T14:56:48.279472+0000 mgr.y (mgr.24425) 194 : cluster [DBG] pgmap v292: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 496 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:56:49.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:48 vm03 bash[23394]: cluster 2026-03-10T14:56:48.279472+0000 mgr.y (mgr.24425) 194 : cluster [DBG] pgmap v292: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 496 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:56:49.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:48 vm03 bash[23394]: cluster 2026-03-10T14:56:48.451721+0000 mon.a (mon.0) 1144 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T14:56:49.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:48 vm03 bash[23394]: cluster 2026-03-10T14:56:48.451721+0000 mon.a (mon.0) 1144 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T14:56:49.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:48 vm03 bash[23394]: audit 2026-03-10T14:56:48.700004+0000 mgr.y (mgr.24425) 195 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:56:49.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:48 vm03 bash[23394]: audit 2026-03-10T14:56:48.700004+0000 mgr.y (mgr.24425) 195 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:56:49.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:48 vm00 bash[28403]: cluster 2026-03-10T14:56:47.810658+0000 mon.a (mon.0) 1143 : cluster [DBG] osdmap e226: 8 total, 8 up, 8 in
2026-03-10T14:56:49.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:48 vm00 bash[28403]: cluster 2026-03-10T14:56:47.810658+0000 mon.a (mon.0) 1143 : cluster [DBG] osdmap e226: 8 total, 8 up, 8 in
2026-03-10T14:56:49.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:48 vm00 bash[28403]: cluster 2026-03-10T14:56:48.279472+0000 mgr.y (mgr.24425) 194 : cluster [DBG] pgmap v292: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 496 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:56:49.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:48 vm00 bash[28403]: cluster 2026-03-10T14:56:48.279472+0000 mgr.y (mgr.24425) 194 : cluster [DBG] pgmap v292: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 496 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:56:49.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:48 vm00 bash[28403]: cluster 2026-03-10T14:56:48.451721+0000 mon.a (mon.0) 1144 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T14:56:49.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:48 vm00 bash[28403]: cluster 2026-03-10T14:56:48.451721+0000 mon.a (mon.0) 1144 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T14:56:49.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:48 vm00 bash[28403]: audit 2026-03-10T14:56:48.700004+0000 mgr.y (mgr.24425) 195 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:56:49.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:48 vm00 bash[28403]: audit 2026-03-10T14:56:48.700004+0000 mgr.y (mgr.24425) 195 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:56:49.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:48 vm00 bash[20726]: cluster 2026-03-10T14:56:47.810658+0000 mon.a (mon.0) 1143 : cluster [DBG] osdmap e226: 8 total, 8 up, 8 in
2026-03-10T14:56:49.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:48 vm00 bash[20726]: cluster 2026-03-10T14:56:47.810658+0000 mon.a (mon.0) 1143 : cluster [DBG] osdmap e226: 8 total, 8 up, 8 in
2026-03-10T14:56:49.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:48 vm00 bash[20726]: cluster 2026-03-10T14:56:48.279472+0000 mgr.y (mgr.24425) 194 : cluster [DBG] pgmap v292: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 496 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:56:49.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:48 vm00 bash[20726]: cluster 2026-03-10T14:56:48.279472+0000 mgr.y (mgr.24425) 194 : cluster [DBG] pgmap v292: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 496 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:56:49.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:48 vm00 bash[20726]: cluster 2026-03-10T14:56:48.451721+0000 mon.a (mon.0) 1144 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T14:56:49.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:48 vm00 bash[20726]: cluster 2026-03-10T14:56:48.451721+0000 mon.a (mon.0) 1144 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T14:56:49.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:48 vm00 bash[20726]: audit 2026-03-10T14:56:48.700004+0000 mgr.y (mgr.24425) 195 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:56:49.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:48 vm00 bash[20726]: audit 2026-03-10T14:56:48.700004+0000 mgr.y (mgr.24425) 195 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:56:50.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:49 vm03 bash[23394]: cluster 2026-03-10T14:56:48.809538+0000 mon.a (mon.0) 1145 : cluster [DBG] osdmap e227: 8 total, 8 up, 8 in
2026-03-10T14:56:50.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:49 vm03 bash[23394]: cluster 2026-03-10T14:56:48.809538+0000 mon.a (mon.0) 1145 : cluster [DBG] osdmap e227: 8 total, 8 up, 8 in
2026-03-10T14:56:50.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:49 vm03 bash[23394]: audit 2026-03-10T14:56:48.844235+0000 mon.b (mon.1) 40 : audit [INF] from='client.? 192.168.123.100:0/744936715' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T14:56:50.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:49 vm03 bash[23394]: audit 2026-03-10T14:56:48.844235+0000 mon.b (mon.1) 40 : audit [INF] from='client.? 192.168.123.100:0/744936715' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T14:56:50.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:49 vm03 bash[23394]: audit 2026-03-10T14:56:48.848431+0000 mon.a (mon.0) 1146 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T14:56:50.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:49 vm03 bash[23394]: audit 2026-03-10T14:56:48.848431+0000 mon.a (mon.0) 1146 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T14:56:50.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:49 vm00 bash[28403]: cluster 2026-03-10T14:56:48.809538+0000 mon.a (mon.0) 1145 : cluster [DBG] osdmap e227: 8 total, 8 up, 8 in
2026-03-10T14:56:50.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:49 vm00 bash[28403]: cluster 2026-03-10T14:56:48.809538+0000 mon.a (mon.0) 1145 : cluster [DBG] osdmap e227: 8 total, 8 up, 8 in
2026-03-10T14:56:50.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:49 vm00 bash[28403]: audit 2026-03-10T14:56:48.844235+0000 mon.b (mon.1) 40 : audit [INF] from='client.? 192.168.123.100:0/744936715' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T14:56:50.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:49 vm00 bash[28403]: audit 2026-03-10T14:56:48.844235+0000 mon.b (mon.1) 40 : audit [INF] from='client.? 192.168.123.100:0/744936715' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T14:56:50.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:49 vm00 bash[28403]: audit 2026-03-10T14:56:48.848431+0000 mon.a (mon.0) 1146 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T14:56:50.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:49 vm00 bash[28403]: audit 2026-03-10T14:56:48.848431+0000 mon.a (mon.0) 1146 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T14:56:50.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:49 vm00 bash[20726]: cluster 2026-03-10T14:56:48.809538+0000 mon.a (mon.0) 1145 : cluster [DBG] osdmap e227: 8 total, 8 up, 8 in
2026-03-10T14:56:50.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:49 vm00 bash[20726]: cluster 2026-03-10T14:56:48.809538+0000 mon.a (mon.0) 1145 : cluster [DBG] osdmap e227: 8 total, 8 up, 8 in
2026-03-10T14:56:50.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:49 vm00 bash[20726]: audit 2026-03-10T14:56:48.844235+0000 mon.b (mon.1) 40 : audit [INF] from='client.? 192.168.123.100:0/744936715' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T14:56:50.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:49 vm00 bash[20726]: audit 2026-03-10T14:56:48.844235+0000 mon.b (mon.1) 40 : audit [INF] from='client.? 192.168.123.100:0/744936715' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T14:56:50.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:49 vm00 bash[20726]: audit 2026-03-10T14:56:48.848431+0000 mon.a (mon.0) 1146 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T14:56:50.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:49 vm00 bash[20726]: audit 2026-03-10T14:56:48.848431+0000 mon.a (mon.0) 1146 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T14:56:51.061 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_get_omap_vals_by_keys PASSED [ 57%]
2026-03-10T14:56:51.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:51 vm03 bash[23394]: audit 2026-03-10T14:56:49.959767+0000 mon.a (mon.0) 1147 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T14:56:51.385 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:51 vm03 bash[23394]: audit 2026-03-10T14:56:49.959767+0000 mon.a (mon.0) 1147 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T14:56:51.385 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:51 vm03 bash[23394]: cluster 2026-03-10T14:56:49.970411+0000 mon.a (mon.0) 1148 : cluster [DBG] osdmap e228: 8 total, 8 up, 8 in
2026-03-10T14:56:51.385 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:51 vm03 bash[23394]: cluster 2026-03-10T14:56:49.970411+0000 mon.a (mon.0) 1148 : cluster [DBG] osdmap e228: 8 total, 8 up, 8 in
2026-03-10T14:56:51.385 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:51 vm03 bash[23394]: cluster 2026-03-10T14:56:50.280062+0000 mgr.y (mgr.24425) 196 : cluster [DBG] pgmap v295: 196 pgs: 196 active+clean; 455 KiB data, 501 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:56:51.385 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:51 vm03 bash[23394]: cluster 2026-03-10T14:56:50.280062+0000 mgr.y (mgr.24425) 196 : cluster [DBG] pgmap v295: 196 pgs: 196 active+clean; 455 KiB data, 501 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:56:51.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:51 vm00 bash[28403]: audit 2026-03-10T14:56:49.959767+0000 mon.a (mon.0) 1147 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T14:56:51.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:51 vm00 bash[28403]: audit 2026-03-10T14:56:49.959767+0000 mon.a (mon.0) 1147 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T14:56:51.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:51 vm00 bash[28403]: cluster 2026-03-10T14:56:49.970411+0000 mon.a (mon.0) 1148 : cluster [DBG] osdmap e228: 8 total, 8 up, 8 in
2026-03-10T14:56:51.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:51 vm00 bash[28403]: cluster 2026-03-10T14:56:49.970411+0000 mon.a (mon.0) 1148 : cluster [DBG] osdmap e228: 8 total, 8 up, 8 in
2026-03-10T14:56:51.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:51 vm00 bash[28403]: cluster 2026-03-10T14:56:50.280062+0000 mgr.y (mgr.24425) 196 : cluster [DBG] pgmap v295: 196 pgs: 196 active+clean; 455 KiB data, 501 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:56:51.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:51 vm00 bash[28403]: cluster 2026-03-10T14:56:50.280062+0000 mgr.y (mgr.24425) 196 : cluster [DBG] pgmap v295: 196 pgs: 196 active+clean; 455 KiB data, 501 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:56:51.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:51 vm00 bash[20726]: audit 2026-03-10T14:56:49.959767+0000 mon.a (mon.0) 1147 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T14:56:51.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:51 vm00 bash[20726]: audit 2026-03-10T14:56:49.959767+0000 mon.a (mon.0) 1147 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T14:56:51.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:51 vm00 bash[20726]: cluster 2026-03-10T14:56:49.970411+0000 mon.a (mon.0) 1148 : cluster [DBG] osdmap e228: 8 total, 8 up, 8 in
2026-03-10T14:56:51.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:51 vm00 bash[20726]: cluster 2026-03-10T14:56:49.970411+0000 mon.a (mon.0) 1148 : cluster [DBG] osdmap e228: 8 total, 8 up, 8 in
2026-03-10T14:56:51.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:51 vm00 bash[20726]: cluster 2026-03-10T14:56:50.280062+0000 mgr.y (mgr.24425) 196 : cluster [DBG] pgmap v295: 196 pgs: 196 active+clean; 455 KiB data, 501 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:56:51.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:51 vm00 bash[20726]: cluster 2026-03-10T14:56:50.280062+0000 mgr.y (mgr.24425) 196 : cluster [DBG] pgmap v295: 196 pgs: 196 active+clean; 455 KiB data, 501 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:56:52.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:52 vm03 bash[23394]: cluster 2026-03-10T14:56:51.057534+0000 mon.a (mon.0) 1149 : cluster [DBG] osdmap e229: 8 total, 8 up, 8 in
2026-03-10T14:56:52.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:52 vm03 bash[23394]: cluster 2026-03-10T14:56:51.057534+0000 mon.a (mon.0) 1149 : cluster [DBG] osdmap e229: 8 total, 8 up, 8 in
2026-03-10T14:56:52.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:52 vm00 bash[28403]: cluster 2026-03-10T14:56:51.057534+0000 mon.a (mon.0) 1149 : cluster [DBG] osdmap e229: 8 total, 8 up, 8 in
2026-03-10T14:56:52.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:52 vm00 bash[28403]: cluster 2026-03-10T14:56:51.057534+0000 mon.a (mon.0) 1149 : cluster [DBG] osdmap e229: 8 total, 8 up, 8 in
2026-03-10T14:56:52.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:52 vm00 bash[20726]: cluster 2026-03-10T14:56:51.057534+0000 mon.a (mon.0) 1149 : cluster [DBG] osdmap e229: 8 total, 8 up, 8 in
2026-03-10T14:56:52.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:52 vm00 bash[20726]: cluster 2026-03-10T14:56:51.057534+0000 mon.a (mon.0) 1149 : cluster [DBG] osdmap e229: 8 total, 8 up, 8 in
2026-03-10T14:56:53.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:53 vm03 bash[23394]: cluster 2026-03-10T14:56:52.248313+0000 mon.a (mon.0) 1150 : cluster [DBG] osdmap e230: 8 total, 8 up, 8 in
2026-03-10T14:56:53.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:53 vm03 bash[23394]: cluster 2026-03-10T14:56:52.248313+0000 mon.a (mon.0) 1150 : cluster [DBG] osdmap e230: 8 total, 8 up, 8 in
2026-03-10T14:56:53.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:53 vm03 bash[23394]: cluster 2026-03-10T14:56:52.280445+0000 mgr.y (mgr.24425) 197 : cluster [DBG] pgmap v298: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 501 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:56:53.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:53 vm03 bash[23394]: cluster 2026-03-10T14:56:52.280445+0000 mgr.y (mgr.24425) 197 : cluster [DBG] pgmap v298: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 501 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:56:53.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:53 vm03 bash[23394]: cluster 2026-03-10T14:56:53.246093+0000 mon.a (mon.0) 1151 : cluster [DBG] osdmap e231: 8 total, 8 up, 8 in
2026-03-10T14:56:53.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:53 vm03 bash[23394]: cluster 2026-03-10T14:56:53.246093+0000 mon.a (mon.0) 1151 : cluster [DBG] osdmap e231: 8 total, 8 up, 8 in
2026-03-10T14:56:53.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:53 vm00 bash[28403]: cluster 2026-03-10T14:56:52.248313+0000 mon.a (mon.0) 1150 : cluster [DBG] osdmap e230: 8 total, 8 up, 8 in
2026-03-10T14:56:53.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:53 vm00 bash[28403]: cluster 2026-03-10T14:56:52.248313+0000 mon.a (mon.0) 1150 : cluster [DBG] osdmap e230: 8 total, 8 up, 8 in
2026-03-10T14:56:53.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:53 vm00 bash[28403]: cluster 2026-03-10T14:56:52.280445+0000 mgr.y (mgr.24425) 197 : cluster [DBG] pgmap v298: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 501 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:56:53.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:53 vm00 bash[28403]: cluster 2026-03-10T14:56:52.280445+0000 mgr.y (mgr.24425) 197 : cluster [DBG] pgmap v298: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 501 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:56:53.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:53 vm00 bash[28403]: cluster 2026-03-10T14:56:53.246093+0000 mon.a (mon.0) 1151 : cluster [DBG] osdmap e231: 8 total, 8 up, 8 in
2026-03-10T14:56:53.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:53 vm00 bash[28403]: cluster 2026-03-10T14:56:53.246093+0000 mon.a (mon.0) 1151 : cluster [DBG] osdmap e231: 8 total, 8 up, 8 in
2026-03-10T14:56:53.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:53 vm00 bash[20726]: cluster 2026-03-10T14:56:52.248313+0000 mon.a (mon.0) 1150 : cluster [DBG] osdmap e230: 8 total, 8 up, 8 in
2026-03-10T14:56:53.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:53 vm00 bash[20726]: cluster 2026-03-10T14:56:52.248313+0000 mon.a (mon.0) 1150 : cluster [DBG] osdmap e230: 8 total, 8 up, 8 in
2026-03-10T14:56:53.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:53 vm00 bash[20726]: cluster 2026-03-10T14:56:52.280445+0000 mgr.y (mgr.24425) 197 : cluster [DBG] pgmap v298: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 501 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:56:53.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:53 vm00 bash[20726]: cluster 2026-03-10T14:56:52.280445+0000 mgr.y (mgr.24425) 197 : cluster [DBG] pgmap v298: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 501 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:56:53.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:53 vm00 bash[20726]: cluster 2026-03-10T14:56:53.246093+0000 mon.a (mon.0) 1151 : cluster [DBG] osdmap e231: 8 total, 8 up, 8 in
2026-03-10T14:56:53.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:53 vm00 bash[20726]: cluster 2026-03-10T14:56:53.246093+0000 mon.a (mon.0) 1151 : cluster [DBG] osdmap e231: 8 total, 8 up, 8 in
2026-03-10T14:56:54.214 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:56:53 vm00 bash[21005]: ::ffff:192.168.123.103 - - [10/Mar/2026:14:56:53] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T14:56:54.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:54 vm00 bash[28403]: audit 2026-03-10T14:56:53.268734+0000 mon.b (mon.1) 41 : audit [INF] from='client.? 192.168.123.100:0/291738048' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T14:56:54.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:54 vm00 bash[28403]: audit 2026-03-10T14:56:53.268734+0000 mon.b (mon.1) 41 : audit [INF] from='client.? 192.168.123.100:0/291738048' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T14:56:54.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:54 vm00 bash[28403]: audit 2026-03-10T14:56:53.283907+0000 mon.a (mon.0) 1152 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T14:56:54.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:54 vm00 bash[28403]: audit 2026-03-10T14:56:53.283907+0000 mon.a (mon.0) 1152 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T14:56:54.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:54 vm00 bash[20726]: audit 2026-03-10T14:56:53.268734+0000 mon.b (mon.1) 41 : audit [INF] from='client.? 192.168.123.100:0/291738048' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T14:56:54.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:54 vm00 bash[20726]: audit 2026-03-10T14:56:53.268734+0000 mon.b (mon.1) 41 : audit [INF] from='client.? 192.168.123.100:0/291738048' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T14:56:54.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:54 vm00 bash[20726]: audit 2026-03-10T14:56:53.283907+0000 mon.a (mon.0) 1152 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T14:56:54.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:54 vm00 bash[20726]: audit 2026-03-10T14:56:53.283907+0000 mon.a (mon.0) 1152 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T14:56:54.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:54 vm03 bash[23394]: audit 2026-03-10T14:56:53.268734+0000 mon.b (mon.1) 41 : audit [INF] from='client.? 192.168.123.100:0/291738048' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T14:56:54.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:54 vm03 bash[23394]: audit 2026-03-10T14:56:53.268734+0000 mon.b (mon.1) 41 : audit [INF] from='client.? 192.168.123.100:0/291738048' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T14:56:54.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:54 vm03 bash[23394]: audit 2026-03-10T14:56:53.283907+0000 mon.a (mon.0) 1152 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T14:56:54.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:54 vm03 bash[23394]: audit 2026-03-10T14:56:53.283907+0000 mon.a (mon.0) 1152 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T14:56:55.420 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_get_omap_keys PASSED [ 58%]
2026-03-10T14:56:55.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:55 vm00 bash[28403]: cluster 2026-03-10T14:56:54.280748+0000 mgr.y (mgr.24425) 198 : cluster [DBG] pgmap v300: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 501 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:56:55.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:55 vm00 bash[28403]: cluster 2026-03-10T14:56:54.280748+0000 mgr.y (mgr.24425) 198 : cluster [DBG] pgmap v300: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 501 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:56:55.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:55 vm00 bash[28403]: audit 2026-03-10T14:56:54.370334+0000 mon.a (mon.0) 1153 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T14:56:55.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:55 vm00 bash[28403]: audit 2026-03-10T14:56:54.370334+0000 mon.a (mon.0) 1153 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T14:56:55.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:55 vm00 bash[28403]: cluster 2026-03-10T14:56:54.377688+0000 mon.a (mon.0) 1154 : cluster [DBG] osdmap e232: 8 total, 8 up, 8 in
2026-03-10T14:56:55.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:55 vm00 bash[28403]: cluster 2026-03-10T14:56:54.377688+0000 mon.a (mon.0) 1154 : cluster [DBG] osdmap e232: 8 total, 8 up, 8 in
2026-03-10T14:56:55.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:55 vm00 bash[28403]: audit 2026-03-10T14:56:54.461669+0000 mon.a (mon.0) 1155 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:56:55.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:55 vm00 bash[28403]: audit 2026-03-10T14:56:54.461669+0000 mon.a (mon.0) 1155 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:56:55.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:55 vm00 bash[20726]: cluster 2026-03-10T14:56:54.280748+0000 mgr.y (mgr.24425) 198 : cluster [DBG] pgmap v300: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 501 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:56:55.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:55 vm00 bash[20726]: cluster 2026-03-10T14:56:54.280748+0000 mgr.y (mgr.24425) 198 : cluster [DBG] pgmap v300: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 501 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:56:55.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:55 vm00 bash[20726]: audit 2026-03-10T14:56:54.370334+0000 mon.a (mon.0) 1153 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T14:56:55.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:55 vm00 bash[20726]: audit 2026-03-10T14:56:54.370334+0000 mon.a (mon.0) 1153 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T14:56:55.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:55 vm00 bash[20726]: cluster 2026-03-10T14:56:54.377688+0000 mon.a (mon.0) 1154 : cluster [DBG] osdmap e232: 8 total, 8 up, 8 in
2026-03-10T14:56:55.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:55 vm00 bash[20726]: cluster 2026-03-10T14:56:54.377688+0000 mon.a (mon.0) 1154 : cluster [DBG] osdmap e232: 8 total, 8 up, 8 in
2026-03-10T14:56:55.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:55 vm00 bash[20726]: audit 2026-03-10T14:56:54.461669+0000 mon.a (mon.0) 1155 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:56:55.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:55 vm00 bash[20726]: audit 2026-03-10T14:56:54.461669+0000 mon.a (mon.0) 1155 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:56:55.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:55 vm03 bash[23394]: cluster 2026-03-10T14:56:54.280748+0000 mgr.y (mgr.24425) 198 : cluster [DBG] pgmap v300: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 501 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:56:55.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:55 vm03 bash[23394]: cluster 2026-03-10T14:56:54.280748+0000 mgr.y (mgr.24425) 198 : cluster [DBG] pgmap v300: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 501 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:56:55.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:55 vm03 bash[23394]: audit 2026-03-10T14:56:54.370334+0000 mon.a (mon.0) 1153 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T14:56:55.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:55 vm03 bash[23394]: audit 2026-03-10T14:56:54.370334+0000 mon.a (mon.0) 1153 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T14:56:55.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:55 vm03 bash[23394]: cluster 2026-03-10T14:56:54.377688+0000 mon.a (mon.0) 1154 : cluster [DBG] osdmap e232: 8 total, 8 up, 8 in
2026-03-10T14:56:55.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:55 vm03 bash[23394]: cluster 2026-03-10T14:56:54.377688+0000 mon.a (mon.0) 1154 : cluster [DBG] osdmap e232: 8 total, 8 up, 8 in
2026-03-10T14:56:55.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:55 vm03 bash[23394]: audit 2026-03-10T14:56:54.461669+0000 mon.a (mon.0) 1155 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:56:55.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:55 vm03 bash[23394]: audit 2026-03-10T14:56:54.461669+0000 mon.a (mon.0) 1155 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:56:56.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:56 vm03 bash[23394]: cluster 2026-03-10T14:56:55.426751+0000 mon.a (mon.0) 1156 : cluster [DBG] osdmap e233: 8 total, 8 up, 8 in
2026-03-10T14:56:56.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:56 vm03 bash[23394]: cluster 2026-03-10T14:56:55.426751+0000 mon.a (mon.0) 1156 : cluster [DBG] osdmap e233: 8 total, 8 up, 8 in
2026-03-10T14:56:56.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:56 vm00 bash[28403]: cluster 2026-03-10T14:56:55.426751+0000 mon.a (mon.0) 1156 : cluster [DBG] osdmap e233: 8 total, 8 up, 8 in
2026-03-10T14:56:56.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:56 vm00 bash[28403]: cluster 2026-03-10T14:56:55.426751+0000 mon.a (mon.0) 1156 : cluster [DBG] osdmap e233: 8 total, 8 up, 8 in
2026-03-10T14:56:56.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:56 vm00 bash[20726]: cluster 2026-03-10T14:56:55.426751+0000 mon.a (mon.0) 1156 : cluster [DBG] osdmap e233: 8 total, 8 up, 8 in
2026-03-10T14:56:56.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:56 vm00 bash[20726]: cluster 2026-03-10T14:56:55.426751+0000 mon.a (mon.0) 1156 : cluster [DBG] osdmap e233: 8 total, 8 up, 8 in
2026-03-10T14:56:57.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:57 vm00 bash[28403]: cluster 2026-03-10T14:56:56.281008+0000 mgr.y (mgr.24425) 199 : cluster [DBG] pgmap v303: 164 pgs: 164 active+clean; 455 KiB data, 505 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:56:57.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:57 vm00 bash[28403]: cluster 2026-03-10T14:56:56.281008+0000 mgr.y (mgr.24425) 199 : cluster [DBG] pgmap v303: 164 pgs: 164 active+clean; 455 KiB data, 505 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:56:57.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:57 vm00 bash[28403]: cluster 2026-03-10T14:56:56.431375+0000 mon.a (mon.0) 1157 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T14:56:57.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:57 vm00 bash[28403]: cluster 2026-03-10T14:56:56.431375+0000 mon.a (mon.0) 1157 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T14:56:57.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:57 vm00 bash[28403]: cluster 2026-03-10T14:56:56.619669+0000 mon.a (mon.0) 1158 : cluster [DBG] osdmap e234: 8 total, 8 up, 8 in
2026-03-10T14:56:57.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:57 vm00 bash[28403]: cluster 2026-03-10T14:56:56.619669+0000 mon.a (mon.0) 1158 : cluster [DBG] osdmap e234: 8 total, 8 up, 8 in
2026-03-10T14:56:57.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:57 vm00 bash[20726]: cluster 2026-03-10T14:56:56.281008+0000 mgr.y (mgr.24425) 199 : cluster [DBG] pgmap v303: 164 pgs: 164 active+clean; 455 KiB data, 505 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:56:57.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:57 vm00 bash[20726]: cluster 2026-03-10T14:56:56.281008+0000 mgr.y (mgr.24425) 199 : cluster [DBG] pgmap v303: 164 pgs: 164 active+clean; 455 KiB data, 505 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:56:57.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:57 vm00 bash[20726]: cluster 2026-03-10T14:56:56.431375+0000 mon.a (mon.0) 1157 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T14:56:57.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:57 vm00 bash[20726]: cluster 2026-03-10T14:56:56.431375+0000 mon.a (mon.0) 1157 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T14:56:57.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:57 vm00 bash[20726]: cluster 2026-03-10T14:56:56.619669+0000 mon.a (mon.0) 1158 : cluster [DBG] osdmap e234: 8 total, 8 up, 8 in
2026-03-10T14:56:57.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:57 vm00 bash[20726]: cluster 2026-03-10T14:56:56.619669+0000 mon.a (mon.0) 1158 : cluster [DBG] osdmap e234: 8 total, 8 up, 8 in
2026-03-10T14:56:58.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:57 vm03 bash[23394]: cluster 2026-03-10T14:56:56.281008+0000 mgr.y (mgr.24425) 199 : cluster [DBG] pgmap v303: 164 pgs: 164 active+clean; 455 KiB data, 505 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:56:58.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:57 vm03 bash[23394]: cluster 2026-03-10T14:56:56.281008+0000 mgr.y (mgr.24425) 199 : cluster [DBG] pgmap v303: 164 pgs: 164 active+clean; 455 KiB data, 505 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:56:58.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:57 vm03 bash[23394]: cluster 2026-03-10T14:56:56.431375+0000 mon.a (mon.0) 1157 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T14:56:58.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:57 vm03 bash[23394]: cluster 2026-03-10T14:56:56.431375+0000 mon.a (mon.0) 1157 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T14:56:58.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:57 vm03 bash[23394]: cluster 2026-03-10T14:56:56.619669+0000 mon.a (mon.0) 1158 : cluster [DBG] osdmap e234: 8 total, 8 up, 8 in
2026-03-10T14:56:58.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:57 vm03 bash[23394]: cluster 2026-03-10T14:56:56.619669+0000 mon.a (mon.0) 1158 : cluster [DBG] osdmap e234: 8 total, 8 up, 8 in
2026-03-10T14:56:58.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:58 vm00 bash[28403]: cluster 2026-03-10T14:56:57.614794+0000 mon.a (mon.0) 1159 : cluster [DBG] osdmap e235: 8 total, 8 up, 8 in
2026-03-10T14:56:58.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:58 vm00 bash[28403]: cluster 2026-03-10T14:56:57.614794+0000 mon.a (mon.0) 1159 : cluster [DBG] osdmap e235: 8 total, 8 up, 8 in
2026-03-10T14:56:58.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:58 vm00 bash[28403]: audit 2026-03-10T14:56:57.653485+0000 mon.b (mon.1) 42 : audit [INF] from='client.? 192.168.123.100:0/371744448' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T14:56:58.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:58 vm00 bash[28403]: audit 2026-03-10T14:56:57.653485+0000 mon.b (mon.1) 42 : audit [INF] from='client.? 192.168.123.100:0/371744448' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T14:56:58.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:58 vm00 bash[28403]: audit 2026-03-10T14:56:57.657685+0000 mon.a (mon.0) 1160 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T14:56:58.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:58 vm00 bash[28403]: audit 2026-03-10T14:56:57.657685+0000 mon.a (mon.0) 1160 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T14:56:58.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:58 vm00 bash[28403]: audit 2026-03-10T14:56:58.613327+0000 mon.a (mon.0) 1161 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T14:56:58.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:58 vm00 bash[28403]: audit 2026-03-10T14:56:58.613327+0000 mon.a (mon.0) 1161 : audit [INF] from='client.?
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:56:58.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:58 vm00 bash[28403]: cluster 2026-03-10T14:56:58.619434+0000 mon.a (mon.0) 1162 : cluster [DBG] osdmap e236: 8 total, 8 up, 8 in 2026-03-10T14:56:58.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:58 vm00 bash[28403]: cluster 2026-03-10T14:56:58.619434+0000 mon.a (mon.0) 1162 : cluster [DBG] osdmap e236: 8 total, 8 up, 8 in 2026-03-10T14:56:58.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:58 vm00 bash[20726]: cluster 2026-03-10T14:56:57.614794+0000 mon.a (mon.0) 1159 : cluster [DBG] osdmap e235: 8 total, 8 up, 8 in 2026-03-10T14:56:58.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:58 vm00 bash[20726]: cluster 2026-03-10T14:56:57.614794+0000 mon.a (mon.0) 1159 : cluster [DBG] osdmap e235: 8 total, 8 up, 8 in 2026-03-10T14:56:58.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:58 vm00 bash[20726]: audit 2026-03-10T14:56:57.653485+0000 mon.b (mon.1) 42 : audit [INF] from='client.? 192.168.123.100:0/371744448' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:58.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:58 vm00 bash[20726]: audit 2026-03-10T14:56:57.653485+0000 mon.b (mon.1) 42 : audit [INF] from='client.? 192.168.123.100:0/371744448' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:58.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:58 vm00 bash[20726]: audit 2026-03-10T14:56:57.657685+0000 mon.a (mon.0) 1160 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:58.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:58 vm00 bash[20726]: audit 2026-03-10T14:56:57.657685+0000 mon.a (mon.0) 1160 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:58.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:58 vm00 bash[20726]: audit 2026-03-10T14:56:58.613327+0000 mon.a (mon.0) 1161 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:56:58.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:58 vm00 bash[20726]: audit 2026-03-10T14:56:58.613327+0000 mon.a (mon.0) 1161 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:56:58.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:58 vm00 bash[20726]: cluster 2026-03-10T14:56:58.619434+0000 mon.a (mon.0) 1162 : cluster [DBG] osdmap e236: 8 total, 8 up, 8 in 2026-03-10T14:56:58.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:58 vm00 bash[20726]: cluster 2026-03-10T14:56:58.619434+0000 mon.a (mon.0) 1162 : cluster [DBG] osdmap e236: 8 total, 8 up, 8 in 2026-03-10T14:56:59.125 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 14:56:58 vm03 bash[48459]: debug there is no tcmu-runner data available 2026-03-10T14:56:59.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:58 vm03 bash[23394]: cluster 2026-03-10T14:56:57.614794+0000 mon.a (mon.0) 1159 : cluster [DBG] osdmap e235: 8 total, 8 up, 8 in 2026-03-10T14:56:59.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:58 vm03 bash[23394]: cluster 2026-03-10T14:56:57.614794+0000 mon.a (mon.0) 1159 : cluster [DBG] osdmap e235: 8 total, 8 up, 8 in 2026-03-10T14:56:59.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:58 vm03 bash[23394]: audit 2026-03-10T14:56:57.653485+0000 mon.b (mon.1) 42 : audit [INF] from='client.? 
192.168.123.100:0/371744448' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:59.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:58 vm03 bash[23394]: audit 2026-03-10T14:56:57.653485+0000 mon.b (mon.1) 42 : audit [INF] from='client.? 192.168.123.100:0/371744448' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:59.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:58 vm03 bash[23394]: audit 2026-03-10T14:56:57.657685+0000 mon.a (mon.0) 1160 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:59.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:58 vm03 bash[23394]: audit 2026-03-10T14:56:57.657685+0000 mon.a (mon.0) 1160 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:56:59.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:58 vm03 bash[23394]: audit 2026-03-10T14:56:58.613327+0000 mon.a (mon.0) 1161 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:56:59.126 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:58 vm03 bash[23394]: audit 2026-03-10T14:56:58.613327+0000 mon.a (mon.0) 1161 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:56:59.126 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:58 vm03 bash[23394]: cluster 2026-03-10T14:56:58.619434+0000 mon.a (mon.0) 1162 : cluster [DBG] osdmap e236: 8 total, 8 up, 8 in 2026-03-10T14:56:59.126 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:58 vm03 bash[23394]: cluster 2026-03-10T14:56:58.619434+0000 mon.a (mon.0) 1162 : cluster [DBG] osdmap e236: 8 total, 8 up, 8 in 2026-03-10T14:56:59.624 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_clear_omap PASSED [ 59%] 2026-03-10T14:56:59.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:59 vm00 bash[28403]: cluster 2026-03-10T14:56:58.281359+0000 mgr.y (mgr.24425) 200 : cluster [DBG] pgmap v306: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 505 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:56:59.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:59 vm00 bash[28403]: cluster 2026-03-10T14:56:58.281359+0000 mgr.y (mgr.24425) 200 : cluster [DBG] pgmap v306: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 505 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:56:59.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:59 vm00 bash[28403]: audit 2026-03-10T14:56:58.710852+0000 mgr.y (mgr.24425) 201 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:56:59.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:59 vm00 bash[28403]: audit 2026-03-10T14:56:58.710852+0000 mgr.y (mgr.24425) 201 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:56:59.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:59 vm00 bash[28403]: cluster 2026-03-10T14:56:59.620016+0000 mon.a (mon.0) 1163 : cluster [DBG] 
osdmap e237: 8 total, 8 up, 8 in 2026-03-10T14:56:59.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:56:59 vm00 bash[28403]: cluster 2026-03-10T14:56:59.620016+0000 mon.a (mon.0) 1163 : cluster [DBG] osdmap e237: 8 total, 8 up, 8 in 2026-03-10T14:56:59.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:59 vm00 bash[20726]: cluster 2026-03-10T14:56:58.281359+0000 mgr.y (mgr.24425) 200 : cluster [DBG] pgmap v306: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 505 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:56:59.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:59 vm00 bash[20726]: cluster 2026-03-10T14:56:58.281359+0000 mgr.y (mgr.24425) 200 : cluster [DBG] pgmap v306: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 505 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:56:59.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:59 vm00 bash[20726]: audit 2026-03-10T14:56:58.710852+0000 mgr.y (mgr.24425) 201 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:56:59.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:59 vm00 bash[20726]: audit 2026-03-10T14:56:58.710852+0000 mgr.y (mgr.24425) 201 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:56:59.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:59 vm00 bash[20726]: cluster 2026-03-10T14:56:59.620016+0000 mon.a (mon.0) 1163 : cluster [DBG] osdmap e237: 8 total, 8 up, 8 in 2026-03-10T14:56:59.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:56:59 vm00 bash[20726]: cluster 2026-03-10T14:56:59.620016+0000 mon.a (mon.0) 1163 : cluster [DBG] osdmap e237: 8 total, 8 up, 8 in 2026-03-10T14:57:00.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:59 vm03 bash[23394]: cluster 2026-03-10T14:56:58.281359+0000 mgr.y (mgr.24425) 200 : cluster [DBG] 
pgmap v306: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 505 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:00.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:59 vm03 bash[23394]: cluster 2026-03-10T14:56:58.281359+0000 mgr.y (mgr.24425) 200 : cluster [DBG] pgmap v306: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 505 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:00.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:59 vm03 bash[23394]: audit 2026-03-10T14:56:58.710852+0000 mgr.y (mgr.24425) 201 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:57:00.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:59 vm03 bash[23394]: audit 2026-03-10T14:56:58.710852+0000 mgr.y (mgr.24425) 201 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:57:00.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:59 vm03 bash[23394]: cluster 2026-03-10T14:56:59.620016+0000 mon.a (mon.0) 1163 : cluster [DBG] osdmap e237: 8 total, 8 up, 8 in 2026-03-10T14:57:00.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:56:59 vm03 bash[23394]: cluster 2026-03-10T14:56:59.620016+0000 mon.a (mon.0) 1163 : cluster [DBG] osdmap e237: 8 total, 8 up, 8 in 2026-03-10T14:57:01.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:01 vm03 bash[23394]: cluster 2026-03-10T14:57:00.281899+0000 mgr.y (mgr.24425) 202 : cluster [DBG] pgmap v309: 164 pgs: 164 active+clean; 455 KiB data, 506 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:01.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:01 vm03 bash[23394]: cluster 2026-03-10T14:57:00.281899+0000 mgr.y (mgr.24425) 202 : cluster [DBG] pgmap v309: 164 pgs: 164 active+clean; 455 KiB data, 506 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-10T14:57:01.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:01 vm00 bash[28403]: cluster 2026-03-10T14:57:00.281899+0000 mgr.y (mgr.24425) 202 : cluster [DBG] pgmap v309: 164 pgs: 164 active+clean; 455 KiB data, 506 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:01.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:01 vm00 bash[28403]: cluster 2026-03-10T14:57:00.281899+0000 mgr.y (mgr.24425) 202 : cluster [DBG] pgmap v309: 164 pgs: 164 active+clean; 455 KiB data, 506 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:01.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:01 vm00 bash[20726]: cluster 2026-03-10T14:57:00.281899+0000 mgr.y (mgr.24425) 202 : cluster [DBG] pgmap v309: 164 pgs: 164 active+clean; 455 KiB data, 506 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:01.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:01 vm00 bash[20726]: cluster 2026-03-10T14:57:00.281899+0000 mgr.y (mgr.24425) 202 : cluster [DBG] pgmap v309: 164 pgs: 164 active+clean; 455 KiB data, 506 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:02.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:02 vm03 bash[23394]: cluster 2026-03-10T14:57:00.959035+0000 mon.a (mon.0) 1164 : cluster [DBG] osdmap e238: 8 total, 8 up, 8 in 2026-03-10T14:57:02.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:02 vm03 bash[23394]: cluster 2026-03-10T14:57:00.959035+0000 mon.a (mon.0) 1164 : cluster [DBG] osdmap e238: 8 total, 8 up, 8 in 2026-03-10T14:57:02.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:02 vm03 bash[23394]: cluster 2026-03-10T14:57:01.881993+0000 mon.a (mon.0) 1165 : cluster [DBG] osdmap e239: 8 total, 8 up, 8 in 2026-03-10T14:57:02.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:02 vm03 bash[23394]: cluster 2026-03-10T14:57:01.881993+0000 mon.a (mon.0) 1165 : cluster [DBG] osdmap e239: 8 total, 8 up, 8 in 2026-03-10T14:57:02.375 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:02 vm03 bash[23394]: audit 2026-03-10T14:57:01.914370+0000 mon.a (mon.0) 1166 : audit [INF] from='client.? 192.168.123.100:0/961029102' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:02.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:02 vm03 bash[23394]: audit 2026-03-10T14:57:01.914370+0000 mon.a (mon.0) 1166 : audit [INF] from='client.? 192.168.123.100:0/961029102' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:02.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:02 vm00 bash[28403]: cluster 2026-03-10T14:57:00.959035+0000 mon.a (mon.0) 1164 : cluster [DBG] osdmap e238: 8 total, 8 up, 8 in 2026-03-10T14:57:02.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:02 vm00 bash[28403]: cluster 2026-03-10T14:57:00.959035+0000 mon.a (mon.0) 1164 : cluster [DBG] osdmap e238: 8 total, 8 up, 8 in 2026-03-10T14:57:02.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:02 vm00 bash[28403]: cluster 2026-03-10T14:57:01.881993+0000 mon.a (mon.0) 1165 : cluster [DBG] osdmap e239: 8 total, 8 up, 8 in 2026-03-10T14:57:02.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:02 vm00 bash[28403]: cluster 2026-03-10T14:57:01.881993+0000 mon.a (mon.0) 1165 : cluster [DBG] osdmap e239: 8 total, 8 up, 8 in 2026-03-10T14:57:02.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:02 vm00 bash[28403]: audit 2026-03-10T14:57:01.914370+0000 mon.a (mon.0) 1166 : audit [INF] from='client.? 192.168.123.100:0/961029102' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:02.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:02 vm00 bash[28403]: audit 2026-03-10T14:57:01.914370+0000 mon.a (mon.0) 1166 : audit [INF] from='client.? 
192.168.123.100:0/961029102' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:02.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:02 vm00 bash[20726]: cluster 2026-03-10T14:57:00.959035+0000 mon.a (mon.0) 1164 : cluster [DBG] osdmap e238: 8 total, 8 up, 8 in 2026-03-10T14:57:02.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:02 vm00 bash[20726]: cluster 2026-03-10T14:57:00.959035+0000 mon.a (mon.0) 1164 : cluster [DBG] osdmap e238: 8 total, 8 up, 8 in 2026-03-10T14:57:02.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:02 vm00 bash[20726]: cluster 2026-03-10T14:57:01.881993+0000 mon.a (mon.0) 1165 : cluster [DBG] osdmap e239: 8 total, 8 up, 8 in 2026-03-10T14:57:02.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:02 vm00 bash[20726]: cluster 2026-03-10T14:57:01.881993+0000 mon.a (mon.0) 1165 : cluster [DBG] osdmap e239: 8 total, 8 up, 8 in 2026-03-10T14:57:02.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:02 vm00 bash[20726]: audit 2026-03-10T14:57:01.914370+0000 mon.a (mon.0) 1166 : audit [INF] from='client.? 192.168.123.100:0/961029102' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:02.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:02 vm00 bash[20726]: audit 2026-03-10T14:57:01.914370+0000 mon.a (mon.0) 1166 : audit [INF] from='client.? 
192.168.123.100:0/961029102' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:03.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:03 vm03 bash[23394]: cluster 2026-03-10T14:57:02.282221+0000 mgr.y (mgr.24425) 203 : cluster [DBG] pgmap v312: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 506 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:03.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:03 vm03 bash[23394]: cluster 2026-03-10T14:57:02.282221+0000 mgr.y (mgr.24425) 203 : cluster [DBG] pgmap v312: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 506 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:03.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:03 vm03 bash[23394]: audit 2026-03-10T14:57:02.881031+0000 mon.a (mon.0) 1167 : audit [INF] from='client.? 192.168.123.100:0/961029102' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:03.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:03 vm03 bash[23394]: audit 2026-03-10T14:57:02.881031+0000 mon.a (mon.0) 1167 : audit [INF] from='client.? 
192.168.123.100:0/961029102' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:03.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:03 vm03 bash[23394]: cluster 2026-03-10T14:57:02.884729+0000 mon.a (mon.0) 1168 : cluster [DBG] osdmap e240: 8 total, 8 up, 8 in 2026-03-10T14:57:03.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:03 vm03 bash[23394]: cluster 2026-03-10T14:57:02.884729+0000 mon.a (mon.0) 1168 : cluster [DBG] osdmap e240: 8 total, 8 up, 8 in 2026-03-10T14:57:03.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:03 vm00 bash[28403]: cluster 2026-03-10T14:57:02.282221+0000 mgr.y (mgr.24425) 203 : cluster [DBG] pgmap v312: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 506 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:03.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:03 vm00 bash[28403]: cluster 2026-03-10T14:57:02.282221+0000 mgr.y (mgr.24425) 203 : cluster [DBG] pgmap v312: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 506 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:03.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:03 vm00 bash[28403]: audit 2026-03-10T14:57:02.881031+0000 mon.a (mon.0) 1167 : audit [INF] from='client.? 192.168.123.100:0/961029102' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:03.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:03 vm00 bash[28403]: audit 2026-03-10T14:57:02.881031+0000 mon.a (mon.0) 1167 : audit [INF] from='client.? 
192.168.123.100:0/961029102' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:03.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:03 vm00 bash[28403]: cluster 2026-03-10T14:57:02.884729+0000 mon.a (mon.0) 1168 : cluster [DBG] osdmap e240: 8 total, 8 up, 8 in 2026-03-10T14:57:03.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:03 vm00 bash[28403]: cluster 2026-03-10T14:57:02.884729+0000 mon.a (mon.0) 1168 : cluster [DBG] osdmap e240: 8 total, 8 up, 8 in 2026-03-10T14:57:03.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:03 vm00 bash[20726]: cluster 2026-03-10T14:57:02.282221+0000 mgr.y (mgr.24425) 203 : cluster [DBG] pgmap v312: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 506 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:03.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:03 vm00 bash[20726]: cluster 2026-03-10T14:57:02.282221+0000 mgr.y (mgr.24425) 203 : cluster [DBG] pgmap v312: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 506 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:03.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:03 vm00 bash[20726]: audit 2026-03-10T14:57:02.881031+0000 mon.a (mon.0) 1167 : audit [INF] from='client.? 192.168.123.100:0/961029102' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:03.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:03 vm00 bash[20726]: audit 2026-03-10T14:57:02.881031+0000 mon.a (mon.0) 1167 : audit [INF] from='client.? 
192.168.123.100:0/961029102' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:03.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:03 vm00 bash[20726]: cluster 2026-03-10T14:57:02.884729+0000 mon.a (mon.0) 1168 : cluster [DBG] osdmap e240: 8 total, 8 up, 8 in 2026-03-10T14:57:03.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:03 vm00 bash[20726]: cluster 2026-03-10T14:57:02.884729+0000 mon.a (mon.0) 1168 : cluster [DBG] osdmap e240: 8 total, 8 up, 8 in 2026-03-10T14:57:03.961 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_remove_omap_range2 PASSED [ 60%] 2026-03-10T14:57:04.077 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:57:03 vm00 bash[21005]: ::ffff:192.168.123.103 - - [10/Mar/2026:14:57:03] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:57:04.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:04 vm03 bash[23394]: cluster 2026-03-10T14:57:03.069585+0000 mon.a (mon.0) 1169 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:57:04.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:04 vm03 bash[23394]: cluster 2026-03-10T14:57:03.069585+0000 mon.a (mon.0) 1169 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:57:04.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:04 vm03 bash[23394]: cluster 2026-03-10T14:57:03.963365+0000 mon.a (mon.0) 1170 : cluster [DBG] osdmap e241: 8 total, 8 up, 8 in 2026-03-10T14:57:04.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:04 vm03 bash[23394]: cluster 2026-03-10T14:57:03.963365+0000 mon.a (mon.0) 1170 : cluster [DBG] osdmap e241: 8 total, 8 up, 8 in 2026-03-10T14:57:04.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:04 vm00 bash[20726]: cluster 2026-03-10T14:57:03.069585+0000 mon.a (mon.0) 1169 : cluster [WRN] 
Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:57:04.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:04 vm00 bash[20726]: cluster 2026-03-10T14:57:03.069585+0000 mon.a (mon.0) 1169 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:57:04.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:04 vm00 bash[20726]: cluster 2026-03-10T14:57:03.963365+0000 mon.a (mon.0) 1170 : cluster [DBG] osdmap e241: 8 total, 8 up, 8 in 2026-03-10T14:57:04.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:04 vm00 bash[20726]: cluster 2026-03-10T14:57:03.963365+0000 mon.a (mon.0) 1170 : cluster [DBG] osdmap e241: 8 total, 8 up, 8 in 2026-03-10T14:57:04.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:04 vm00 bash[28403]: cluster 2026-03-10T14:57:03.069585+0000 mon.a (mon.0) 1169 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:57:04.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:04 vm00 bash[28403]: cluster 2026-03-10T14:57:03.069585+0000 mon.a (mon.0) 1169 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:57:04.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:04 vm00 bash[28403]: cluster 2026-03-10T14:57:03.963365+0000 mon.a (mon.0) 1170 : cluster [DBG] osdmap e241: 8 total, 8 up, 8 in 2026-03-10T14:57:04.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:04 vm00 bash[28403]: cluster 2026-03-10T14:57:03.963365+0000 mon.a (mon.0) 1170 : cluster [DBG] osdmap e241: 8 total, 8 up, 8 in 2026-03-10T14:57:05.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:05 vm03 bash[23394]: cluster 2026-03-10T14:57:04.282591+0000 mgr.y (mgr.24425) 204 : cluster [DBG] pgmap v315: 164 pgs: 164 active+clean; 455 KiB data, 506 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:57:05.375 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:05 vm03 bash[23394]: cluster 2026-03-10T14:57:04.282591+0000 mgr.y (mgr.24425) 204 : cluster [DBG] pgmap v315: 164 pgs: 164 active+clean; 455 KiB data, 506 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:57:05.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:05 vm03 bash[23394]: cluster 2026-03-10T14:57:04.962193+0000 mon.a (mon.0) 1171 : cluster [DBG] osdmap e242: 8 total, 8 up, 8 in 2026-03-10T14:57:05.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:05 vm03 bash[23394]: cluster 2026-03-10T14:57:04.962193+0000 mon.a (mon.0) 1171 : cluster [DBG] osdmap e242: 8 total, 8 up, 8 in 2026-03-10T14:57:05.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:05 vm00 bash[28403]: cluster 2026-03-10T14:57:04.282591+0000 mgr.y (mgr.24425) 204 : cluster [DBG] pgmap v315: 164 pgs: 164 active+clean; 455 KiB data, 506 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:57:05.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:05 vm00 bash[28403]: cluster 2026-03-10T14:57:04.282591+0000 mgr.y (mgr.24425) 204 : cluster [DBG] pgmap v315: 164 pgs: 164 active+clean; 455 KiB data, 506 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:57:05.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:05 vm00 bash[28403]: cluster 2026-03-10T14:57:04.962193+0000 mon.a (mon.0) 1171 : cluster [DBG] osdmap e242: 8 total, 8 up, 8 in 2026-03-10T14:57:05.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:05 vm00 bash[28403]: cluster 2026-03-10T14:57:04.962193+0000 mon.a (mon.0) 1171 : cluster [DBG] osdmap e242: 8 total, 8 up, 8 in 2026-03-10T14:57:05.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:05 vm00 bash[20726]: cluster 2026-03-10T14:57:04.282591+0000 mgr.y (mgr.24425) 204 : cluster [DBG] pgmap v315: 164 pgs: 164 active+clean; 455 KiB data, 506 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:57:05.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:05 vm00 bash[20726]: cluster 2026-03-10T14:57:04.282591+0000 
mgr.y (mgr.24425) 204 : cluster [DBG] pgmap v315: 164 pgs: 164 active+clean; 455 KiB data, 506 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:57:05.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:05 vm00 bash[20726]: cluster 2026-03-10T14:57:04.962193+0000 mon.a (mon.0) 1171 : cluster [DBG] osdmap e242: 8 total, 8 up, 8 in 2026-03-10T14:57:05.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:05 vm00 bash[20726]: cluster 2026-03-10T14:57:04.962193+0000 mon.a (mon.0) 1171 : cluster [DBG] osdmap e242: 8 total, 8 up, 8 in 2026-03-10T14:57:07.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:06 vm03 bash[23394]: cluster 2026-03-10T14:57:05.973444+0000 mon.a (mon.0) 1172 : cluster [DBG] osdmap e243: 8 total, 8 up, 8 in 2026-03-10T14:57:07.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:06 vm03 bash[23394]: cluster 2026-03-10T14:57:05.973444+0000 mon.a (mon.0) 1172 : cluster [DBG] osdmap e243: 8 total, 8 up, 8 in 2026-03-10T14:57:07.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:06 vm03 bash[23394]: audit 2026-03-10T14:57:06.010205+0000 mon.b (mon.1) 43 : audit [INF] from='client.? 192.168.123.100:0/3871879305' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:07.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:06 vm03 bash[23394]: audit 2026-03-10T14:57:06.010205+0000 mon.b (mon.1) 43 : audit [INF] from='client.? 192.168.123.100:0/3871879305' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:07.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:06 vm03 bash[23394]: audit 2026-03-10T14:57:06.014478+0000 mon.a (mon.0) 1173 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:07.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:06 vm03 bash[23394]: audit 2026-03-10T14:57:06.014478+0000 mon.a (mon.0) 1173 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:07.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:06 vm03 bash[23394]: cluster 2026-03-10T14:57:06.282944+0000 mgr.y (mgr.24425) 205 : cluster [DBG] pgmap v318: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 506 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:07.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:06 vm03 bash[23394]: cluster 2026-03-10T14:57:06.282944+0000 mgr.y (mgr.24425) 205 : cluster [DBG] pgmap v318: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 506 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:07.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:06 vm00 bash[28403]: cluster 2026-03-10T14:57:05.973444+0000 mon.a (mon.0) 1172 : cluster [DBG] osdmap e243: 8 total, 8 up, 8 in 2026-03-10T14:57:07.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:06 vm00 bash[28403]: cluster 2026-03-10T14:57:05.973444+0000 mon.a (mon.0) 1172 : cluster [DBG] osdmap e243: 8 total, 8 up, 8 in 2026-03-10T14:57:07.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:06 vm00 bash[28403]: audit 2026-03-10T14:57:06.010205+0000 mon.b (mon.1) 43 : audit [INF] from='client.? 192.168.123.100:0/3871879305' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:07.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:06 vm00 bash[28403]: audit 2026-03-10T14:57:06.010205+0000 mon.b (mon.1) 43 : audit [INF] from='client.? 192.168.123.100:0/3871879305' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:07.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:06 vm00 bash[28403]: audit 2026-03-10T14:57:06.014478+0000 mon.a (mon.0) 1173 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:07.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:06 vm00 bash[28403]: audit 2026-03-10T14:57:06.014478+0000 mon.a (mon.0) 1173 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:07.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:06 vm00 bash[28403]: cluster 2026-03-10T14:57:06.282944+0000 mgr.y (mgr.24425) 205 : cluster [DBG] pgmap v318: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 506 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:07.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:06 vm00 bash[28403]: cluster 2026-03-10T14:57:06.282944+0000 mgr.y (mgr.24425) 205 : cluster [DBG] pgmap v318: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 506 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:07.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:06 vm00 bash[20726]: cluster 2026-03-10T14:57:05.973444+0000 mon.a (mon.0) 1172 : cluster [DBG] osdmap e243: 8 total, 8 up, 8 in 2026-03-10T14:57:07.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:06 vm00 bash[20726]: cluster 2026-03-10T14:57:05.973444+0000 mon.a (mon.0) 1172 : cluster [DBG] osdmap e243: 8 total, 8 up, 8 in 2026-03-10T14:57:07.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:06 vm00 bash[20726]: audit 2026-03-10T14:57:06.010205+0000 mon.b (mon.1) 43 : audit [INF] from='client.? 192.168.123.100:0/3871879305' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:07.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:06 vm00 bash[20726]: audit 2026-03-10T14:57:06.010205+0000 mon.b (mon.1) 43 : audit [INF] from='client.? 
192.168.123.100:0/3871879305' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:07.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:06 vm00 bash[20726]: audit 2026-03-10T14:57:06.014478+0000 mon.a (mon.0) 1173 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:07.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:06 vm00 bash[20726]: audit 2026-03-10T14:57:06.014478+0000 mon.a (mon.0) 1173 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:07.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:06 vm00 bash[20726]: cluster 2026-03-10T14:57:06.282944+0000 mgr.y (mgr.24425) 205 : cluster [DBG] pgmap v318: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 506 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:07.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:06 vm00 bash[20726]: cluster 2026-03-10T14:57:06.282944+0000 mgr.y (mgr.24425) 205 : cluster [DBG] pgmap v318: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 506 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:07.984 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_omap_cmp PASSED [ 61%] 2026-03-10T14:57:08.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:07 vm03 bash[23394]: audit 2026-03-10T14:57:06.972800+0000 mon.a (mon.0) 1174 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:08.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:07 vm03 bash[23394]: audit 2026-03-10T14:57:06.972800+0000 mon.a (mon.0) 1174 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:08.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:07 vm03 bash[23394]: cluster 2026-03-10T14:57:06.981872+0000 mon.a (mon.0) 1175 : cluster [DBG] osdmap e244: 8 total, 8 up, 8 in 2026-03-10T14:57:08.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:07 vm03 bash[23394]: cluster 2026-03-10T14:57:06.981872+0000 mon.a (mon.0) 1175 : cluster [DBG] osdmap e244: 8 total, 8 up, 8 in 2026-03-10T14:57:08.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:07 vm03 bash[23394]: audit 2026-03-10T14:57:07.408740+0000 mon.a (mon.0) 1176 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:57:08.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:07 vm03 bash[23394]: audit 2026-03-10T14:57:07.408740+0000 mon.a (mon.0) 1176 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:57:08.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:07 vm03 bash[23394]: audit 2026-03-10T14:57:07.807809+0000 mon.a (mon.0) 1177 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:57:08.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:07 vm03 bash[23394]: audit 2026-03-10T14:57:07.807809+0000 mon.a (mon.0) 1177 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:57:08.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:07 vm03 bash[23394]: audit 2026-03-10T14:57:07.814498+0000 mon.a (mon.0) 1178 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:57:08.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:07 vm03 bash[23394]: audit 2026-03-10T14:57:07.814498+0000 mon.a (mon.0) 1178 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 
2026-03-10T14:57:08.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:08 vm00 bash[28403]: audit 2026-03-10T14:57:06.972800+0000 mon.a (mon.0) 1174 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:08.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:08 vm00 bash[28403]: audit 2026-03-10T14:57:06.972800+0000 mon.a (mon.0) 1174 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:08.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:08 vm00 bash[28403]: cluster 2026-03-10T14:57:06.981872+0000 mon.a (mon.0) 1175 : cluster [DBG] osdmap e244: 8 total, 8 up, 8 in 2026-03-10T14:57:08.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:08 vm00 bash[28403]: cluster 2026-03-10T14:57:06.981872+0000 mon.a (mon.0) 1175 : cluster [DBG] osdmap e244: 8 total, 8 up, 8 in 2026-03-10T14:57:08.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:08 vm00 bash[28403]: audit 2026-03-10T14:57:07.408740+0000 mon.a (mon.0) 1176 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:57:08.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:08 vm00 bash[28403]: audit 2026-03-10T14:57:07.408740+0000 mon.a (mon.0) 1176 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:57:08.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:08 vm00 bash[28403]: audit 2026-03-10T14:57:07.807809+0000 mon.a (mon.0) 1177 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:57:08.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:08 vm00 bash[28403]: audit 2026-03-10T14:57:07.807809+0000 mon.a (mon.0) 1177 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:57:08.464 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:08 vm00 bash[28403]: audit 2026-03-10T14:57:07.814498+0000 mon.a (mon.0) 1178 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:57:08.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:08 vm00 bash[28403]: audit 2026-03-10T14:57:07.814498+0000 mon.a (mon.0) 1178 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:57:08.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:08 vm00 bash[20726]: audit 2026-03-10T14:57:06.972800+0000 mon.a (mon.0) 1174 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:08.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:08 vm00 bash[20726]: audit 2026-03-10T14:57:06.972800+0000 mon.a (mon.0) 1174 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:08.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:08 vm00 bash[20726]: cluster 2026-03-10T14:57:06.981872+0000 mon.a (mon.0) 1175 : cluster [DBG] osdmap e244: 8 total, 8 up, 8 in 2026-03-10T14:57:08.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:08 vm00 bash[20726]: cluster 2026-03-10T14:57:06.981872+0000 mon.a (mon.0) 1175 : cluster [DBG] osdmap e244: 8 total, 8 up, 8 in 2026-03-10T14:57:08.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:08 vm00 bash[20726]: audit 2026-03-10T14:57:07.408740+0000 mon.a (mon.0) 1176 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:57:08.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:08 vm00 bash[20726]: audit 2026-03-10T14:57:07.408740+0000 mon.a (mon.0) 1176 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:57:08.464 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:08 vm00 bash[20726]: audit 2026-03-10T14:57:07.807809+0000 mon.a (mon.0) 1177 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:57:08.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:08 vm00 bash[20726]: audit 2026-03-10T14:57:07.807809+0000 mon.a (mon.0) 1177 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:57:08.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:08 vm00 bash[20726]: audit 2026-03-10T14:57:07.814498+0000 mon.a (mon.0) 1178 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:57:08.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:08 vm00 bash[20726]: audit 2026-03-10T14:57:07.814498+0000 mon.a (mon.0) 1178 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:57:09.100 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 14:57:08 vm03 bash[48459]: debug there is no tcmu-runner data available 2026-03-10T14:57:09.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:09 vm03 bash[23394]: cluster 2026-03-10T14:57:07.979813+0000 mon.a (mon.0) 1179 : cluster [DBG] osdmap e245: 8 total, 8 up, 8 in 2026-03-10T14:57:09.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:09 vm03 bash[23394]: cluster 2026-03-10T14:57:07.979813+0000 mon.a (mon.0) 1179 : cluster [DBG] osdmap e245: 8 total, 8 up, 8 in 2026-03-10T14:57:09.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:09 vm03 bash[23394]: audit 2026-03-10T14:57:08.167939+0000 mon.a (mon.0) 1180 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:57:09.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:09 vm03 bash[23394]: audit 2026-03-10T14:57:08.167939+0000 mon.a (mon.0) 1180 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config 
generate-minimal-conf"}]: dispatch 2026-03-10T14:57:09.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:09 vm03 bash[23394]: audit 2026-03-10T14:57:08.168579+0000 mon.a (mon.0) 1181 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:57:09.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:09 vm03 bash[23394]: audit 2026-03-10T14:57:08.168579+0000 mon.a (mon.0) 1181 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:57:09.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:09 vm03 bash[23394]: audit 2026-03-10T14:57:08.177510+0000 mon.a (mon.0) 1182 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:57:09.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:09 vm03 bash[23394]: audit 2026-03-10T14:57:08.177510+0000 mon.a (mon.0) 1182 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:57:09.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:09 vm03 bash[23394]: cluster 2026-03-10T14:57:08.283253+0000 mgr.y (mgr.24425) 206 : cluster [DBG] pgmap v321: 164 pgs: 164 active+clean; 455 KiB data, 506 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:09.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:09 vm03 bash[23394]: cluster 2026-03-10T14:57:08.283253+0000 mgr.y (mgr.24425) 206 : cluster [DBG] pgmap v321: 164 pgs: 164 active+clean; 455 KiB data, 506 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:09.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:09 vm03 bash[23394]: audit 2026-03-10T14:57:08.717639+0000 mgr.y (mgr.24425) 207 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:57:09.375 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:09 vm03 bash[23394]: audit 2026-03-10T14:57:08.717639+0000 mgr.y (mgr.24425) 207 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:57:09.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:09 vm00 bash[28403]: cluster 2026-03-10T14:57:07.979813+0000 mon.a (mon.0) 1179 : cluster [DBG] osdmap e245: 8 total, 8 up, 8 in 2026-03-10T14:57:09.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:09 vm00 bash[28403]: cluster 2026-03-10T14:57:07.979813+0000 mon.a (mon.0) 1179 : cluster [DBG] osdmap e245: 8 total, 8 up, 8 in 2026-03-10T14:57:09.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:09 vm00 bash[28403]: audit 2026-03-10T14:57:08.167939+0000 mon.a (mon.0) 1180 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:57:09.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:09 vm00 bash[28403]: audit 2026-03-10T14:57:08.167939+0000 mon.a (mon.0) 1180 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:57:09.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:09 vm00 bash[28403]: audit 2026-03-10T14:57:08.168579+0000 mon.a (mon.0) 1181 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:57:09.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:09 vm00 bash[28403]: audit 2026-03-10T14:57:08.168579+0000 mon.a (mon.0) 1181 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:57:09.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:09 vm00 bash[28403]: audit 2026-03-10T14:57:08.177510+0000 mon.a (mon.0) 1182 : audit [INF] 
from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:57:09.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:09 vm00 bash[28403]: audit 2026-03-10T14:57:08.177510+0000 mon.a (mon.0) 1182 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:57:09.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:09 vm00 bash[28403]: cluster 2026-03-10T14:57:08.283253+0000 mgr.y (mgr.24425) 206 : cluster [DBG] pgmap v321: 164 pgs: 164 active+clean; 455 KiB data, 506 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:09.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:09 vm00 bash[28403]: cluster 2026-03-10T14:57:08.283253+0000 mgr.y (mgr.24425) 206 : cluster [DBG] pgmap v321: 164 pgs: 164 active+clean; 455 KiB data, 506 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:09.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:09 vm00 bash[28403]: audit 2026-03-10T14:57:08.717639+0000 mgr.y (mgr.24425) 207 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:57:09.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:09 vm00 bash[28403]: audit 2026-03-10T14:57:08.717639+0000 mgr.y (mgr.24425) 207 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:57:09.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:09 vm00 bash[20726]: cluster 2026-03-10T14:57:07.979813+0000 mon.a (mon.0) 1179 : cluster [DBG] osdmap e245: 8 total, 8 up, 8 in 2026-03-10T14:57:09.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:09 vm00 bash[20726]: cluster 2026-03-10T14:57:07.979813+0000 mon.a (mon.0) 1179 : cluster [DBG] osdmap e245: 8 total, 8 up, 8 in 2026-03-10T14:57:09.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:09 vm00 bash[20726]: audit 2026-03-10T14:57:08.167939+0000 mon.a (mon.0) 1180 
: audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:57:09.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:09 vm00 bash[20726]: audit 2026-03-10T14:57:08.167939+0000 mon.a (mon.0) 1180 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:57:09.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:09 vm00 bash[20726]: audit 2026-03-10T14:57:08.168579+0000 mon.a (mon.0) 1181 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:57:09.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:09 vm00 bash[20726]: audit 2026-03-10T14:57:08.168579+0000 mon.a (mon.0) 1181 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:57:09.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:09 vm00 bash[20726]: audit 2026-03-10T14:57:08.177510+0000 mon.a (mon.0) 1182 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:57:09.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:09 vm00 bash[20726]: audit 2026-03-10T14:57:08.177510+0000 mon.a (mon.0) 1182 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:57:09.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:09 vm00 bash[20726]: cluster 2026-03-10T14:57:08.283253+0000 mgr.y (mgr.24425) 206 : cluster [DBG] pgmap v321: 164 pgs: 164 active+clean; 455 KiB data, 506 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:09.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:09 vm00 bash[20726]: cluster 2026-03-10T14:57:08.283253+0000 mgr.y (mgr.24425) 206 : cluster [DBG] pgmap v321: 164 pgs: 164 active+clean; 455 KiB data, 506 MiB used, 159 GiB 
/ 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:09.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:09 vm00 bash[20726]: audit 2026-03-10T14:57:08.717639+0000 mgr.y (mgr.24425) 207 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:57:09.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:09 vm00 bash[20726]: audit 2026-03-10T14:57:08.717639+0000 mgr.y (mgr.24425) 207 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:57:10.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:10 vm00 bash[28403]: cluster 2026-03-10T14:57:09.117692+0000 mon.a (mon.0) 1183 : cluster [DBG] osdmap e246: 8 total, 8 up, 8 in 2026-03-10T14:57:10.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:10 vm00 bash[28403]: cluster 2026-03-10T14:57:09.117692+0000 mon.a (mon.0) 1183 : cluster [DBG] osdmap e246: 8 total, 8 up, 8 in 2026-03-10T14:57:10.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:10 vm00 bash[28403]: cluster 2026-03-10T14:57:09.174463+0000 mon.a (mon.0) 1184 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:57:10.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:10 vm00 bash[28403]: cluster 2026-03-10T14:57:09.174463+0000 mon.a (mon.0) 1184 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:57:10.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:10 vm00 bash[28403]: audit 2026-03-10T14:57:09.467507+0000 mon.a (mon.0) 1185 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:57:10.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:10 vm00 bash[28403]: audit 2026-03-10T14:57:09.467507+0000 mon.a (mon.0) 1185 : audit 
[DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:57:10.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:10 vm00 bash[20726]: cluster 2026-03-10T14:57:09.117692+0000 mon.a (mon.0) 1183 : cluster [DBG] osdmap e246: 8 total, 8 up, 8 in 2026-03-10T14:57:10.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:10 vm00 bash[20726]: cluster 2026-03-10T14:57:09.117692+0000 mon.a (mon.0) 1183 : cluster [DBG] osdmap e246: 8 total, 8 up, 8 in 2026-03-10T14:57:10.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:10 vm00 bash[20726]: cluster 2026-03-10T14:57:09.174463+0000 mon.a (mon.0) 1184 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:57:10.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:10 vm00 bash[20726]: cluster 2026-03-10T14:57:09.174463+0000 mon.a (mon.0) 1184 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:57:10.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:10 vm00 bash[20726]: audit 2026-03-10T14:57:09.467507+0000 mon.a (mon.0) 1185 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:57:10.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:10 vm00 bash[20726]: audit 2026-03-10T14:57:09.467507+0000 mon.a (mon.0) 1185 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:57:10.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:10 vm03 bash[23394]: cluster 2026-03-10T14:57:09.117692+0000 mon.a (mon.0) 1183 : cluster [DBG] osdmap e246: 8 total, 8 up, 8 in 2026-03-10T14:57:10.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:10 vm03 bash[23394]: cluster 2026-03-10T14:57:09.117692+0000 mon.a 
(mon.0) 1183 : cluster [DBG] osdmap e246: 8 total, 8 up, 8 in 2026-03-10T14:57:10.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:10 vm03 bash[23394]: cluster 2026-03-10T14:57:09.174463+0000 mon.a (mon.0) 1184 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:57:10.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:10 vm03 bash[23394]: cluster 2026-03-10T14:57:09.174463+0000 mon.a (mon.0) 1184 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:57:10.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:10 vm03 bash[23394]: audit 2026-03-10T14:57:09.467507+0000 mon.a (mon.0) 1185 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:57:10.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:10 vm03 bash[23394]: audit 2026-03-10T14:57:09.467507+0000 mon.a (mon.0) 1185 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:57:11.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:11 vm00 bash[28403]: cluster 2026-03-10T14:57:10.128142+0000 mon.a (mon.0) 1186 : cluster [DBG] osdmap e247: 8 total, 8 up, 8 in 2026-03-10T14:57:11.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:11 vm00 bash[28403]: cluster 2026-03-10T14:57:10.128142+0000 mon.a (mon.0) 1186 : cluster [DBG] osdmap e247: 8 total, 8 up, 8 in 2026-03-10T14:57:11.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:11 vm00 bash[28403]: audit 2026-03-10T14:57:10.161767+0000 mon.a (mon.0) 1187 : audit [INF] from='client.? 
192.168.123.100:0/1930553808' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:11.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:11 vm00 bash[28403]: audit 2026-03-10T14:57:10.161767+0000 mon.a (mon.0) 1187 : audit [INF] from='client.? 192.168.123.100:0/1930553808' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:11.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:11 vm00 bash[28403]: cluster 2026-03-10T14:57:10.283766+0000 mgr.y (mgr.24425) 208 : cluster [DBG] pgmap v324: 196 pgs: 196 active+clean; 455 KiB data, 507 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-10T14:57:11.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:11 vm00 bash[28403]: cluster 2026-03-10T14:57:10.283766+0000 mgr.y (mgr.24425) 208 : cluster [DBG] pgmap v324: 196 pgs: 196 active+clean; 455 KiB data, 507 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-10T14:57:11.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:11 vm00 bash[20726]: cluster 2026-03-10T14:57:10.128142+0000 mon.a (mon.0) 1186 : cluster [DBG] osdmap e247: 8 total, 8 up, 8 in 2026-03-10T14:57:11.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:11 vm00 bash[20726]: cluster 2026-03-10T14:57:10.128142+0000 mon.a (mon.0) 1186 : cluster [DBG] osdmap e247: 8 total, 8 up, 8 in 2026-03-10T14:57:11.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:11 vm00 bash[20726]: audit 2026-03-10T14:57:10.161767+0000 mon.a (mon.0) 1187 : audit [INF] from='client.? 192.168.123.100:0/1930553808' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:11.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:11 vm00 bash[20726]: audit 2026-03-10T14:57:10.161767+0000 mon.a (mon.0) 1187 : audit [INF] from='client.? 
192.168.123.100:0/1930553808' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:11.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:11 vm00 bash[20726]: cluster 2026-03-10T14:57:10.283766+0000 mgr.y (mgr.24425) 208 : cluster [DBG] pgmap v324: 196 pgs: 196 active+clean; 455 KiB data, 507 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-10T14:57:11.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:11 vm00 bash[20726]: cluster 2026-03-10T14:57:10.283766+0000 mgr.y (mgr.24425) 208 : cluster [DBG] pgmap v324: 196 pgs: 196 active+clean; 455 KiB data, 507 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-10T14:57:11.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:11 vm03 bash[23394]: cluster 2026-03-10T14:57:10.128142+0000 mon.a (mon.0) 1186 : cluster [DBG] osdmap e247: 8 total, 8 up, 8 in 2026-03-10T14:57:11.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:11 vm03 bash[23394]: cluster 2026-03-10T14:57:10.128142+0000 mon.a (mon.0) 1186 : cluster [DBG] osdmap e247: 8 total, 8 up, 8 in 2026-03-10T14:57:11.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:11 vm03 bash[23394]: audit 2026-03-10T14:57:10.161767+0000 mon.a (mon.0) 1187 : audit [INF] from='client.? 192.168.123.100:0/1930553808' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:11.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:11 vm03 bash[23394]: audit 2026-03-10T14:57:10.161767+0000 mon.a (mon.0) 1187 : audit [INF] from='client.? 
192.168.123.100:0/1930553808' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:11.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:11 vm03 bash[23394]: cluster 2026-03-10T14:57:10.283766+0000 mgr.y (mgr.24425) 208 : cluster [DBG] pgmap v324: 196 pgs: 196 active+clean; 455 KiB data, 507 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-10T14:57:11.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:11 vm03 bash[23394]: cluster 2026-03-10T14:57:10.283766+0000 mgr.y (mgr.24425) 208 : cluster [DBG] pgmap v324: 196 pgs: 196 active+clean; 455 KiB data, 507 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-10T14:57:12.152 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_cmpext_op PASSED [ 62%] 2026-03-10T14:57:12.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:12 vm00 bash[28403]: audit 2026-03-10T14:57:11.139925+0000 mon.a (mon.0) 1188 : audit [INF] from='client.? 192.168.123.100:0/1930553808' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:12.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:12 vm00 bash[28403]: audit 2026-03-10T14:57:11.139925+0000 mon.a (mon.0) 1188 : audit [INF] from='client.? 
192.168.123.100:0/1930553808' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:12.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:12 vm00 bash[28403]: cluster 2026-03-10T14:57:11.142755+0000 mon.a (mon.0) 1189 : cluster [DBG] osdmap e248: 8 total, 8 up, 8 in 2026-03-10T14:57:12.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:12 vm00 bash[28403]: cluster 2026-03-10T14:57:11.142755+0000 mon.a (mon.0) 1189 : cluster [DBG] osdmap e248: 8 total, 8 up, 8 in 2026-03-10T14:57:12.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:12 vm00 bash[28403]: cluster 2026-03-10T14:57:12.154648+0000 mon.a (mon.0) 1190 : cluster [DBG] osdmap e249: 8 total, 8 up, 8 in 2026-03-10T14:57:12.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:12 vm00 bash[28403]: cluster 2026-03-10T14:57:12.154648+0000 mon.a (mon.0) 1190 : cluster [DBG] osdmap e249: 8 total, 8 up, 8 in 2026-03-10T14:57:12.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:12 vm00 bash[20726]: audit 2026-03-10T14:57:11.139925+0000 mon.a (mon.0) 1188 : audit [INF] from='client.? 192.168.123.100:0/1930553808' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:12.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:12 vm00 bash[20726]: audit 2026-03-10T14:57:11.139925+0000 mon.a (mon.0) 1188 : audit [INF] from='client.? 
192.168.123.100:0/1930553808' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:12.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:12 vm00 bash[20726]: cluster 2026-03-10T14:57:11.142755+0000 mon.a (mon.0) 1189 : cluster [DBG] osdmap e248: 8 total, 8 up, 8 in 2026-03-10T14:57:12.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:12 vm00 bash[20726]: cluster 2026-03-10T14:57:11.142755+0000 mon.a (mon.0) 1189 : cluster [DBG] osdmap e248: 8 total, 8 up, 8 in 2026-03-10T14:57:12.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:12 vm00 bash[20726]: cluster 2026-03-10T14:57:12.154648+0000 mon.a (mon.0) 1190 : cluster [DBG] osdmap e249: 8 total, 8 up, 8 in 2026-03-10T14:57:12.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:12 vm00 bash[20726]: cluster 2026-03-10T14:57:12.154648+0000 mon.a (mon.0) 1190 : cluster [DBG] osdmap e249: 8 total, 8 up, 8 in 2026-03-10T14:57:12.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:12 vm03 bash[23394]: audit 2026-03-10T14:57:11.139925+0000 mon.a (mon.0) 1188 : audit [INF] from='client.? 192.168.123.100:0/1930553808' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:12.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:12 vm03 bash[23394]: audit 2026-03-10T14:57:11.139925+0000 mon.a (mon.0) 1188 : audit [INF] from='client.? 
192.168.123.100:0/1930553808' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:12.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:12 vm03 bash[23394]: cluster 2026-03-10T14:57:11.142755+0000 mon.a (mon.0) 1189 : cluster [DBG] osdmap e248: 8 total, 8 up, 8 in 2026-03-10T14:57:12.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:12 vm03 bash[23394]: cluster 2026-03-10T14:57:11.142755+0000 mon.a (mon.0) 1189 : cluster [DBG] osdmap e248: 8 total, 8 up, 8 in 2026-03-10T14:57:12.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:12 vm03 bash[23394]: cluster 2026-03-10T14:57:12.154648+0000 mon.a (mon.0) 1190 : cluster [DBG] osdmap e249: 8 total, 8 up, 8 in 2026-03-10T14:57:12.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:12 vm03 bash[23394]: cluster 2026-03-10T14:57:12.154648+0000 mon.a (mon.0) 1190 : cluster [DBG] osdmap e249: 8 total, 8 up, 8 in 2026-03-10T14:57:13.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:13 vm00 bash[28403]: cluster 2026-03-10T14:57:12.284034+0000 mgr.y (mgr.24425) 209 : cluster [DBG] pgmap v327: 164 pgs: 164 active+clean; 455 KiB data, 507 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:13.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:13 vm00 bash[28403]: cluster 2026-03-10T14:57:12.284034+0000 mgr.y (mgr.24425) 209 : cluster [DBG] pgmap v327: 164 pgs: 164 active+clean; 455 KiB data, 507 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:13.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:13 vm00 bash[20726]: cluster 2026-03-10T14:57:12.284034+0000 mgr.y (mgr.24425) 209 : cluster [DBG] pgmap v327: 164 pgs: 164 active+clean; 455 KiB data, 507 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:13.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:13 vm00 bash[20726]: cluster 2026-03-10T14:57:12.284034+0000 mgr.y (mgr.24425) 209 : cluster [DBG] pgmap v327: 164 pgs: 164 
active+clean; 455 KiB data, 507 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:13.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:13 vm03 bash[23394]: cluster 2026-03-10T14:57:12.284034+0000 mgr.y (mgr.24425) 209 : cluster [DBG] pgmap v327: 164 pgs: 164 active+clean; 455 KiB data, 507 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:13.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:13 vm03 bash[23394]: cluster 2026-03-10T14:57:12.284034+0000 mgr.y (mgr.24425) 209 : cluster [DBG] pgmap v327: 164 pgs: 164 active+clean; 455 KiB data, 507 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:14.204 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:57:13 vm00 bash[21005]: ::ffff:192.168.123.103 - - [10/Mar/2026:14:57:13] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:57:14.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:14 vm00 bash[28403]: cluster 2026-03-10T14:57:13.193409+0000 mon.a (mon.0) 1191 : cluster [DBG] osdmap e250: 8 total, 8 up, 8 in 2026-03-10T14:57:14.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:14 vm00 bash[28403]: cluster 2026-03-10T14:57:13.193409+0000 mon.a (mon.0) 1191 : cluster [DBG] osdmap e250: 8 total, 8 up, 8 in 2026-03-10T14:57:14.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:14 vm00 bash[20726]: cluster 2026-03-10T14:57:13.193409+0000 mon.a (mon.0) 1191 : cluster [DBG] osdmap e250: 8 total, 8 up, 8 in 2026-03-10T14:57:14.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:14 vm00 bash[20726]: cluster 2026-03-10T14:57:13.193409+0000 mon.a (mon.0) 1191 : cluster [DBG] osdmap e250: 8 total, 8 up, 8 in 2026-03-10T14:57:14.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:14 vm03 bash[23394]: cluster 2026-03-10T14:57:13.193409+0000 mon.a (mon.0) 1191 : cluster [DBG] osdmap e250: 8 total, 8 up, 8 in 2026-03-10T14:57:14.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:14 vm03 bash[23394]: cluster 
2026-03-10T14:57:13.193409+0000 mon.a (mon.0) 1191 : cluster [DBG] osdmap e250: 8 total, 8 up, 8 in 2026-03-10T14:57:15.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:15 vm00 bash[28403]: cluster 2026-03-10T14:57:14.181939+0000 mon.a (mon.0) 1192 : cluster [DBG] osdmap e251: 8 total, 8 up, 8 in 2026-03-10T14:57:15.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:15 vm00 bash[28403]: cluster 2026-03-10T14:57:14.181939+0000 mon.a (mon.0) 1192 : cluster [DBG] osdmap e251: 8 total, 8 up, 8 in 2026-03-10T14:57:15.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:15 vm00 bash[28403]: audit 2026-03-10T14:57:14.244445+0000 mon.a (mon.0) 1193 : audit [INF] from='client.? 192.168.123.100:0/1638248596' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:15.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:15 vm00 bash[28403]: audit 2026-03-10T14:57:14.244445+0000 mon.a (mon.0) 1193 : audit [INF] from='client.? 192.168.123.100:0/1638248596' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:15.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:15 vm00 bash[28403]: cluster 2026-03-10T14:57:14.284405+0000 mgr.y (mgr.24425) 210 : cluster [DBG] pgmap v330: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 507 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:57:15.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:15 vm00 bash[28403]: cluster 2026-03-10T14:57:14.284405+0000 mgr.y (mgr.24425) 210 : cluster [DBG] pgmap v330: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 507 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:57:15.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:15 vm00 bash[28403]: audit 2026-03-10T14:57:15.191257+0000 mon.a (mon.0) 1194 : audit [INF] from='client.? 
192.168.123.100:0/1638248596' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:15.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:15 vm00 bash[28403]: audit 2026-03-10T14:57:15.191257+0000 mon.a (mon.0) 1194 : audit [INF] from='client.? 192.168.123.100:0/1638248596' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:15.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:15 vm00 bash[28403]: cluster 2026-03-10T14:57:15.195332+0000 mon.a (mon.0) 1195 : cluster [DBG] osdmap e252: 8 total, 8 up, 8 in 2026-03-10T14:57:15.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:15 vm00 bash[28403]: cluster 2026-03-10T14:57:15.195332+0000 mon.a (mon.0) 1195 : cluster [DBG] osdmap e252: 8 total, 8 up, 8 in 2026-03-10T14:57:15.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:15 vm00 bash[20726]: cluster 2026-03-10T14:57:14.181939+0000 mon.a (mon.0) 1192 : cluster [DBG] osdmap e251: 8 total, 8 up, 8 in 2026-03-10T14:57:15.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:15 vm00 bash[20726]: cluster 2026-03-10T14:57:14.181939+0000 mon.a (mon.0) 1192 : cluster [DBG] osdmap e251: 8 total, 8 up, 8 in 2026-03-10T14:57:15.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:15 vm00 bash[20726]: audit 2026-03-10T14:57:14.244445+0000 mon.a (mon.0) 1193 : audit [INF] from='client.? 192.168.123.100:0/1638248596' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:15.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:15 vm00 bash[20726]: audit 2026-03-10T14:57:14.244445+0000 mon.a (mon.0) 1193 : audit [INF] from='client.? 
192.168.123.100:0/1638248596' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:15.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:15 vm00 bash[20726]: cluster 2026-03-10T14:57:14.284405+0000 mgr.y (mgr.24425) 210 : cluster [DBG] pgmap v330: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 507 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:57:15.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:15 vm00 bash[20726]: cluster 2026-03-10T14:57:14.284405+0000 mgr.y (mgr.24425) 210 : cluster [DBG] pgmap v330: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 507 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:57:15.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:15 vm00 bash[20726]: audit 2026-03-10T14:57:15.191257+0000 mon.a (mon.0) 1194 : audit [INF] from='client.? 192.168.123.100:0/1638248596' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:15.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:15 vm00 bash[20726]: audit 2026-03-10T14:57:15.191257+0000 mon.a (mon.0) 1194 : audit [INF] from='client.? 
192.168.123.100:0/1638248596' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:15.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:15 vm00 bash[20726]: cluster 2026-03-10T14:57:15.195332+0000 mon.a (mon.0) 1195 : cluster [DBG] osdmap e252: 8 total, 8 up, 8 in 2026-03-10T14:57:15.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:15 vm00 bash[20726]: cluster 2026-03-10T14:57:15.195332+0000 mon.a (mon.0) 1195 : cluster [DBG] osdmap e252: 8 total, 8 up, 8 in 2026-03-10T14:57:15.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:15 vm03 bash[23394]: cluster 2026-03-10T14:57:14.181939+0000 mon.a (mon.0) 1192 : cluster [DBG] osdmap e251: 8 total, 8 up, 8 in 2026-03-10T14:57:15.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:15 vm03 bash[23394]: cluster 2026-03-10T14:57:14.181939+0000 mon.a (mon.0) 1192 : cluster [DBG] osdmap e251: 8 total, 8 up, 8 in 2026-03-10T14:57:15.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:15 vm03 bash[23394]: audit 2026-03-10T14:57:14.244445+0000 mon.a (mon.0) 1193 : audit [INF] from='client.? 192.168.123.100:0/1638248596' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:15.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:15 vm03 bash[23394]: audit 2026-03-10T14:57:14.244445+0000 mon.a (mon.0) 1193 : audit [INF] from='client.? 
192.168.123.100:0/1638248596' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:15.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:15 vm03 bash[23394]: cluster 2026-03-10T14:57:14.284405+0000 mgr.y (mgr.24425) 210 : cluster [DBG] pgmap v330: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 507 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:57:15.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:15 vm03 bash[23394]: cluster 2026-03-10T14:57:14.284405+0000 mgr.y (mgr.24425) 210 : cluster [DBG] pgmap v330: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 507 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:57:15.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:15 vm03 bash[23394]: audit 2026-03-10T14:57:15.191257+0000 mon.a (mon.0) 1194 : audit [INF] from='client.? 192.168.123.100:0/1638248596' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:15.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:15 vm03 bash[23394]: audit 2026-03-10T14:57:15.191257+0000 mon.a (mon.0) 1194 : audit [INF] from='client.? 
192.168.123.100:0/1638248596' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:15.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:15 vm03 bash[23394]: cluster 2026-03-10T14:57:15.195332+0000 mon.a (mon.0) 1195 : cluster [DBG] osdmap e252: 8 total, 8 up, 8 in 2026-03-10T14:57:15.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:15 vm03 bash[23394]: cluster 2026-03-10T14:57:15.195332+0000 mon.a (mon.0) 1195 : cluster [DBG] osdmap e252: 8 total, 8 up, 8 in 2026-03-10T14:57:16.197 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_xattrs_op PASSED [ 63%] 2026-03-10T14:57:16.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:16 vm03 bash[23394]: cluster 2026-03-10T14:57:15.207610+0000 mon.a (mon.0) 1196 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:57:16.652 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:16 vm03 bash[23394]: cluster 2026-03-10T14:57:15.207610+0000 mon.a (mon.0) 1196 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:57:16.652 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:16 vm03 bash[23394]: cluster 2026-03-10T14:57:16.198784+0000 mon.a (mon.0) 1197 : cluster [DBG] osdmap e253: 8 total, 8 up, 8 in 2026-03-10T14:57:16.653 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:16 vm03 bash[23394]: cluster 2026-03-10T14:57:16.198784+0000 mon.a (mon.0) 1197 : cluster [DBG] osdmap e253: 8 total, 8 up, 8 in 2026-03-10T14:57:16.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:16 vm00 bash[28403]: cluster 2026-03-10T14:57:15.207610+0000 mon.a (mon.0) 1196 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:57:16.733 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:16 vm00 bash[28403]: cluster 
2026-03-10T14:57:15.207610+0000 mon.a (mon.0) 1196 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:57:16.733 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:16 vm00 bash[28403]: cluster 2026-03-10T14:57:16.198784+0000 mon.a (mon.0) 1197 : cluster [DBG] osdmap e253: 8 total, 8 up, 8 in 2026-03-10T14:57:16.733 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:16 vm00 bash[28403]: cluster 2026-03-10T14:57:16.198784+0000 mon.a (mon.0) 1197 : cluster [DBG] osdmap e253: 8 total, 8 up, 8 in 2026-03-10T14:57:16.733 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:16 vm00 bash[20726]: cluster 2026-03-10T14:57:15.207610+0000 mon.a (mon.0) 1196 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:57:16.733 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:16 vm00 bash[20726]: cluster 2026-03-10T14:57:15.207610+0000 mon.a (mon.0) 1196 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:57:16.733 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:16 vm00 bash[20726]: cluster 2026-03-10T14:57:16.198784+0000 mon.a (mon.0) 1197 : cluster [DBG] osdmap e253: 8 total, 8 up, 8 in 2026-03-10T14:57:16.733 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:16 vm00 bash[20726]: cluster 2026-03-10T14:57:16.198784+0000 mon.a (mon.0) 1197 : cluster [DBG] osdmap e253: 8 total, 8 up, 8 in 2026-03-10T14:57:17.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:17 vm03 bash[23394]: cluster 2026-03-10T14:57:16.284750+0000 mgr.y (mgr.24425) 211 : cluster [DBG] pgmap v333: 164 pgs: 164 active+clean; 455 KiB data, 507 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:17.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:17 vm03 bash[23394]: cluster 2026-03-10T14:57:16.284750+0000 mgr.y (mgr.24425) 211 : cluster [DBG] pgmap v333: 164 pgs: 164 active+clean; 
455 KiB data, 507 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:17.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:17 vm03 bash[23394]: cluster 2026-03-10T14:57:17.199868+0000 mon.a (mon.0) 1198 : cluster [DBG] osdmap e254: 8 total, 8 up, 8 in 2026-03-10T14:57:17.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:17 vm03 bash[23394]: cluster 2026-03-10T14:57:17.199868+0000 mon.a (mon.0) 1198 : cluster [DBG] osdmap e254: 8 total, 8 up, 8 in 2026-03-10T14:57:17.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:17 vm00 bash[28403]: cluster 2026-03-10T14:57:16.284750+0000 mgr.y (mgr.24425) 211 : cluster [DBG] pgmap v333: 164 pgs: 164 active+clean; 455 KiB data, 507 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:17.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:17 vm00 bash[28403]: cluster 2026-03-10T14:57:16.284750+0000 mgr.y (mgr.24425) 211 : cluster [DBG] pgmap v333: 164 pgs: 164 active+clean; 455 KiB data, 507 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:17.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:17 vm00 bash[28403]: cluster 2026-03-10T14:57:17.199868+0000 mon.a (mon.0) 1198 : cluster [DBG] osdmap e254: 8 total, 8 up, 8 in 2026-03-10T14:57:17.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:17 vm00 bash[28403]: cluster 2026-03-10T14:57:17.199868+0000 mon.a (mon.0) 1198 : cluster [DBG] osdmap e254: 8 total, 8 up, 8 in 2026-03-10T14:57:17.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:17 vm00 bash[20726]: cluster 2026-03-10T14:57:16.284750+0000 mgr.y (mgr.24425) 211 : cluster [DBG] pgmap v333: 164 pgs: 164 active+clean; 455 KiB data, 507 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:17.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:17 vm00 bash[20726]: cluster 2026-03-10T14:57:16.284750+0000 mgr.y (mgr.24425) 211 : cluster [DBG] pgmap v333: 164 pgs: 164 active+clean; 455 KiB data, 507 MiB used, 
159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:17.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:17 vm00 bash[20726]: cluster 2026-03-10T14:57:17.199868+0000 mon.a (mon.0) 1198 : cluster [DBG] osdmap e254: 8 total, 8 up, 8 in 2026-03-10T14:57:17.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:17 vm00 bash[20726]: cluster 2026-03-10T14:57:17.199868+0000 mon.a (mon.0) 1198 : cluster [DBG] osdmap e254: 8 total, 8 up, 8 in 2026-03-10T14:57:19.125 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 14:57:18 vm03 bash[48459]: debug there is no tcmu-runner data available 2026-03-10T14:57:19.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:19 vm00 bash[28403]: cluster 2026-03-10T14:57:18.213620+0000 mon.a (mon.0) 1199 : cluster [DBG] osdmap e255: 8 total, 8 up, 8 in 2026-03-10T14:57:19.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:19 vm00 bash[28403]: cluster 2026-03-10T14:57:18.213620+0000 mon.a (mon.0) 1199 : cluster [DBG] osdmap e255: 8 total, 8 up, 8 in 2026-03-10T14:57:19.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:19 vm00 bash[28403]: audit 2026-03-10T14:57:18.264684+0000 mon.a (mon.0) 1200 : audit [INF] from='client.? 192.168.123.100:0/2459680608' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:19.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:19 vm00 bash[28403]: audit 2026-03-10T14:57:18.264684+0000 mon.a (mon.0) 1200 : audit [INF] from='client.? 
192.168.123.100:0/2459680608' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:19.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:19 vm00 bash[28403]: cluster 2026-03-10T14:57:18.285056+0000 mgr.y (mgr.24425) 212 : cluster [DBG] pgmap v336: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 507 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:19.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:19 vm00 bash[28403]: cluster 2026-03-10T14:57:18.285056+0000 mgr.y (mgr.24425) 212 : cluster [DBG] pgmap v336: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 507 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:19.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:19 vm00 bash[28403]: audit 2026-03-10T14:57:18.722165+0000 mgr.y (mgr.24425) 213 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:57:19.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:19 vm00 bash[28403]: audit 2026-03-10T14:57:18.722165+0000 mgr.y (mgr.24425) 213 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:57:19.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:19 vm00 bash[20726]: cluster 2026-03-10T14:57:18.213620+0000 mon.a (mon.0) 1199 : cluster [DBG] osdmap e255: 8 total, 8 up, 8 in 2026-03-10T14:57:19.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:19 vm00 bash[20726]: cluster 2026-03-10T14:57:18.213620+0000 mon.a (mon.0) 1199 : cluster [DBG] osdmap e255: 8 total, 8 up, 8 in 2026-03-10T14:57:19.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:19 vm00 bash[20726]: audit 2026-03-10T14:57:18.264684+0000 mon.a (mon.0) 1200 : audit [INF] from='client.? 
192.168.123.100:0/2459680608' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:19.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:19 vm00 bash[20726]: audit 2026-03-10T14:57:18.264684+0000 mon.a (mon.0) 1200 : audit [INF] from='client.? 192.168.123.100:0/2459680608' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:19.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:19 vm00 bash[20726]: cluster 2026-03-10T14:57:18.285056+0000 mgr.y (mgr.24425) 212 : cluster [DBG] pgmap v336: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 507 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:19.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:19 vm00 bash[20726]: cluster 2026-03-10T14:57:18.285056+0000 mgr.y (mgr.24425) 212 : cluster [DBG] pgmap v336: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 507 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:19.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:19 vm00 bash[20726]: audit 2026-03-10T14:57:18.722165+0000 mgr.y (mgr.24425) 213 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:57:19.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:19 vm00 bash[20726]: audit 2026-03-10T14:57:18.722165+0000 mgr.y (mgr.24425) 213 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:57:19.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:19 vm03 bash[23394]: cluster 2026-03-10T14:57:18.213620+0000 mon.a (mon.0) 1199 : cluster [DBG] osdmap e255: 8 total, 8 up, 8 in 2026-03-10T14:57:19.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:19 vm03 bash[23394]: cluster 2026-03-10T14:57:18.213620+0000 mon.a (mon.0) 1199 : cluster [DBG] osdmap e255: 8 total, 8 up, 8 in 
2026-03-10T14:57:19.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:19 vm03 bash[23394]: audit 2026-03-10T14:57:18.264684+0000 mon.a (mon.0) 1200 : audit [INF] from='client.? 192.168.123.100:0/2459680608' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:19.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:19 vm03 bash[23394]: audit 2026-03-10T14:57:18.264684+0000 mon.a (mon.0) 1200 : audit [INF] from='client.? 192.168.123.100:0/2459680608' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:19.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:19 vm03 bash[23394]: cluster 2026-03-10T14:57:18.285056+0000 mgr.y (mgr.24425) 212 : cluster [DBG] pgmap v336: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 507 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:19.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:19 vm03 bash[23394]: cluster 2026-03-10T14:57:18.285056+0000 mgr.y (mgr.24425) 212 : cluster [DBG] pgmap v336: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 507 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:19.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:19 vm03 bash[23394]: audit 2026-03-10T14:57:18.722165+0000 mgr.y (mgr.24425) 213 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:57:19.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:19 vm03 bash[23394]: audit 2026-03-10T14:57:18.722165+0000 mgr.y (mgr.24425) 213 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:57:20.382 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_locator PASSED [ 64%] 2026-03-10T14:57:20.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:20 vm00 
bash[28403]: audit 2026-03-10T14:57:19.210909+0000 mon.a (mon.0) 1201 : audit [INF] from='client.? 192.168.123.100:0/2459680608' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:20.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:20 vm00 bash[28403]: audit 2026-03-10T14:57:19.210909+0000 mon.a (mon.0) 1201 : audit [INF] from='client.? 192.168.123.100:0/2459680608' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:20.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:20 vm00 bash[28403]: cluster 2026-03-10T14:57:19.222318+0000 mon.a (mon.0) 1202 : cluster [DBG] osdmap e256: 8 total, 8 up, 8 in 2026-03-10T14:57:20.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:20 vm00 bash[28403]: cluster 2026-03-10T14:57:19.222318+0000 mon.a (mon.0) 1202 : cluster [DBG] osdmap e256: 8 total, 8 up, 8 in 2026-03-10T14:57:20.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:20 vm00 bash[20726]: audit 2026-03-10T14:57:19.210909+0000 mon.a (mon.0) 1201 : audit [INF] from='client.? 192.168.123.100:0/2459680608' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:20.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:20 vm00 bash[20726]: audit 2026-03-10T14:57:19.210909+0000 mon.a (mon.0) 1201 : audit [INF] from='client.? 
192.168.123.100:0/2459680608' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:20.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:20 vm00 bash[20726]: cluster 2026-03-10T14:57:19.222318+0000 mon.a (mon.0) 1202 : cluster [DBG] osdmap e256: 8 total, 8 up, 8 in 2026-03-10T14:57:20.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:20 vm00 bash[20726]: cluster 2026-03-10T14:57:19.222318+0000 mon.a (mon.0) 1202 : cluster [DBG] osdmap e256: 8 total, 8 up, 8 in 2026-03-10T14:57:20.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:20 vm03 bash[23394]: audit 2026-03-10T14:57:19.210909+0000 mon.a (mon.0) 1201 : audit [INF] from='client.? 192.168.123.100:0/2459680608' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:20.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:20 vm03 bash[23394]: audit 2026-03-10T14:57:19.210909+0000 mon.a (mon.0) 1201 : audit [INF] from='client.? 192.168.123.100:0/2459680608' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:20.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:20 vm03 bash[23394]: cluster 2026-03-10T14:57:19.222318+0000 mon.a (mon.0) 1202 : cluster [DBG] osdmap e256: 8 total, 8 up, 8 in 2026-03-10T14:57:20.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:20 vm03 bash[23394]: cluster 2026-03-10T14:57:19.222318+0000 mon.a (mon.0) 1202 : cluster [DBG] osdmap e256: 8 total, 8 up, 8 in 2026-03-10T14:57:21.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:21 vm00 bash[28403]: cluster 2026-03-10T14:57:20.285640+0000 mgr.y (mgr.24425) 214 : cluster [DBG] pgmap v338: 196 pgs: 196 active+clean; 455 KiB data, 512 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 500 B/s wr, 2 op/s 2026-03-10T14:57:21.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:21 vm00 bash[28403]: cluster 2026-03-10T14:57:20.285640+0000 mgr.y (mgr.24425) 214 : cluster [DBG] pgmap v338: 
196 pgs: 196 active+clean; 455 KiB data, 512 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 500 B/s wr, 2 op/s 2026-03-10T14:57:21.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:21 vm00 bash[28403]: cluster 2026-03-10T14:57:20.381962+0000 mon.a (mon.0) 1203 : cluster [DBG] osdmap e257: 8 total, 8 up, 8 in 2026-03-10T14:57:21.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:21 vm00 bash[28403]: cluster 2026-03-10T14:57:20.381962+0000 mon.a (mon.0) 1203 : cluster [DBG] osdmap e257: 8 total, 8 up, 8 in 2026-03-10T14:57:21.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:21 vm00 bash[20726]: cluster 2026-03-10T14:57:20.285640+0000 mgr.y (mgr.24425) 214 : cluster [DBG] pgmap v338: 196 pgs: 196 active+clean; 455 KiB data, 512 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 500 B/s wr, 2 op/s 2026-03-10T14:57:21.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:21 vm00 bash[20726]: cluster 2026-03-10T14:57:20.285640+0000 mgr.y (mgr.24425) 214 : cluster [DBG] pgmap v338: 196 pgs: 196 active+clean; 455 KiB data, 512 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 500 B/s wr, 2 op/s 2026-03-10T14:57:21.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:21 vm00 bash[20726]: cluster 2026-03-10T14:57:20.381962+0000 mon.a (mon.0) 1203 : cluster [DBG] osdmap e257: 8 total, 8 up, 8 in 2026-03-10T14:57:21.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:21 vm00 bash[20726]: cluster 2026-03-10T14:57:20.381962+0000 mon.a (mon.0) 1203 : cluster [DBG] osdmap e257: 8 total, 8 up, 8 in 2026-03-10T14:57:21.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:21 vm03 bash[23394]: cluster 2026-03-10T14:57:20.285640+0000 mgr.y (mgr.24425) 214 : cluster [DBG] pgmap v338: 196 pgs: 196 active+clean; 455 KiB data, 512 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 500 B/s wr, 2 op/s 2026-03-10T14:57:21.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:21 vm03 bash[23394]: cluster 2026-03-10T14:57:20.285640+0000 mgr.y (mgr.24425) 214 : 
cluster [DBG] pgmap v338: 196 pgs: 196 active+clean; 455 KiB data, 512 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 500 B/s wr, 2 op/s 2026-03-10T14:57:21.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:21 vm03 bash[23394]: cluster 2026-03-10T14:57:20.381962+0000 mon.a (mon.0) 1203 : cluster [DBG] osdmap e257: 8 total, 8 up, 8 in 2026-03-10T14:57:21.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:21 vm03 bash[23394]: cluster 2026-03-10T14:57:20.381962+0000 mon.a (mon.0) 1203 : cluster [DBG] osdmap e257: 8 total, 8 up, 8 in 2026-03-10T14:57:22.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:22 vm00 bash[28403]: cluster 2026-03-10T14:57:21.409120+0000 mon.a (mon.0) 1204 : cluster [DBG] osdmap e258: 8 total, 8 up, 8 in 2026-03-10T14:57:22.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:22 vm00 bash[28403]: cluster 2026-03-10T14:57:21.409120+0000 mon.a (mon.0) 1204 : cluster [DBG] osdmap e258: 8 total, 8 up, 8 in 2026-03-10T14:57:22.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:22 vm00 bash[28403]: cluster 2026-03-10T14:57:22.425512+0000 mon.a (mon.0) 1205 : cluster [DBG] osdmap e259: 8 total, 8 up, 8 in 2026-03-10T14:57:22.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:22 vm00 bash[28403]: cluster 2026-03-10T14:57:22.425512+0000 mon.a (mon.0) 1205 : cluster [DBG] osdmap e259: 8 total, 8 up, 8 in 2026-03-10T14:57:22.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:22 vm00 bash[20726]: cluster 2026-03-10T14:57:21.409120+0000 mon.a (mon.0) 1204 : cluster [DBG] osdmap e258: 8 total, 8 up, 8 in 2026-03-10T14:57:22.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:22 vm00 bash[20726]: cluster 2026-03-10T14:57:21.409120+0000 mon.a (mon.0) 1204 : cluster [DBG] osdmap e258: 8 total, 8 up, 8 in 2026-03-10T14:57:22.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:22 vm00 bash[20726]: cluster 2026-03-10T14:57:22.425512+0000 mon.a (mon.0) 1205 : cluster [DBG] osdmap e259: 8 total, 8 up, 8 in 
2026-03-10T14:57:22.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:22 vm00 bash[20726]: cluster 2026-03-10T14:57:22.425512+0000 mon.a (mon.0) 1205 : cluster [DBG] osdmap e259: 8 total, 8 up, 8 in 2026-03-10T14:57:22.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:22 vm03 bash[23394]: cluster 2026-03-10T14:57:21.409120+0000 mon.a (mon.0) 1204 : cluster [DBG] osdmap e258: 8 total, 8 up, 8 in 2026-03-10T14:57:22.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:22 vm03 bash[23394]: cluster 2026-03-10T14:57:21.409120+0000 mon.a (mon.0) 1204 : cluster [DBG] osdmap e258: 8 total, 8 up, 8 in 2026-03-10T14:57:22.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:22 vm03 bash[23394]: cluster 2026-03-10T14:57:22.425512+0000 mon.a (mon.0) 1205 : cluster [DBG] osdmap e259: 8 total, 8 up, 8 in 2026-03-10T14:57:22.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:22 vm03 bash[23394]: cluster 2026-03-10T14:57:22.425512+0000 mon.a (mon.0) 1205 : cluster [DBG] osdmap e259: 8 total, 8 up, 8 in 2026-03-10T14:57:23.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:23 vm00 bash[28403]: cluster 2026-03-10T14:57:22.286024+0000 mgr.y (mgr.24425) 215 : cluster [DBG] pgmap v341: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 512 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:23.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:23 vm00 bash[28403]: cluster 2026-03-10T14:57:22.286024+0000 mgr.y (mgr.24425) 215 : cluster [DBG] pgmap v341: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 512 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:23.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:23 vm00 bash[28403]: audit 2026-03-10T14:57:22.458520+0000 mon.a (mon.0) 1206 : audit [INF] from='client.? 
192.168.123.100:0/4236680821' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:23.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:23 vm00 bash[28403]: audit 2026-03-10T14:57:22.458520+0000 mon.a (mon.0) 1206 : audit [INF] from='client.? 192.168.123.100:0/4236680821' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:23.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:23 vm00 bash[28403]: audit 2026-03-10T14:57:23.408295+0000 mon.a (mon.0) 1207 : audit [INF] from='client.? 192.168.123.100:0/4236680821' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:23.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:23 vm00 bash[28403]: audit 2026-03-10T14:57:23.408295+0000 mon.a (mon.0) 1207 : audit [INF] from='client.? 192.168.123.100:0/4236680821' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:23.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:23 vm00 bash[28403]: cluster 2026-03-10T14:57:23.415759+0000 mon.a (mon.0) 1208 : cluster [DBG] osdmap e260: 8 total, 8 up, 8 in 2026-03-10T14:57:23.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:23 vm00 bash[28403]: cluster 2026-03-10T14:57:23.415759+0000 mon.a (mon.0) 1208 : cluster [DBG] osdmap e260: 8 total, 8 up, 8 in 2026-03-10T14:57:23.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:23 vm00 bash[20726]: cluster 2026-03-10T14:57:22.286024+0000 mgr.y (mgr.24425) 215 : cluster [DBG] pgmap v341: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 512 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:23.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:23 vm00 bash[20726]: cluster 2026-03-10T14:57:22.286024+0000 mgr.y (mgr.24425) 215 : cluster [DBG] pgmap v341: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 512 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-10T14:57:23.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:23 vm00 bash[20726]: audit 2026-03-10T14:57:22.458520+0000 mon.a (mon.0) 1206 : audit [INF] from='client.? 192.168.123.100:0/4236680821' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:23.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:23 vm00 bash[20726]: audit 2026-03-10T14:57:22.458520+0000 mon.a (mon.0) 1206 : audit [INF] from='client.? 192.168.123.100:0/4236680821' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:23.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:23 vm00 bash[20726]: audit 2026-03-10T14:57:23.408295+0000 mon.a (mon.0) 1207 : audit [INF] from='client.? 192.168.123.100:0/4236680821' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:23.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:23 vm00 bash[20726]: audit 2026-03-10T14:57:23.408295+0000 mon.a (mon.0) 1207 : audit [INF] from='client.? 
192.168.123.100:0/4236680821' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:23.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:23 vm00 bash[20726]: cluster 2026-03-10T14:57:23.415759+0000 mon.a (mon.0) 1208 : cluster [DBG] osdmap e260: 8 total, 8 up, 8 in 2026-03-10T14:57:23.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:23 vm00 bash[20726]: cluster 2026-03-10T14:57:23.415759+0000 mon.a (mon.0) 1208 : cluster [DBG] osdmap e260: 8 total, 8 up, 8 in 2026-03-10T14:57:23.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:23 vm03 bash[23394]: cluster 2026-03-10T14:57:22.286024+0000 mgr.y (mgr.24425) 215 : cluster [DBG] pgmap v341: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 512 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:23.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:23 vm03 bash[23394]: cluster 2026-03-10T14:57:22.286024+0000 mgr.y (mgr.24425) 215 : cluster [DBG] pgmap v341: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 512 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:23.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:23 vm03 bash[23394]: audit 2026-03-10T14:57:22.458520+0000 mon.a (mon.0) 1206 : audit [INF] from='client.? 192.168.123.100:0/4236680821' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:23.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:23 vm03 bash[23394]: audit 2026-03-10T14:57:22.458520+0000 mon.a (mon.0) 1206 : audit [INF] from='client.? 192.168.123.100:0/4236680821' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:23.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:23 vm03 bash[23394]: audit 2026-03-10T14:57:23.408295+0000 mon.a (mon.0) 1207 : audit [INF] from='client.? 
192.168.123.100:0/4236680821' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:23.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:23 vm03 bash[23394]: audit 2026-03-10T14:57:23.408295+0000 mon.a (mon.0) 1207 : audit [INF] from='client.? 192.168.123.100:0/4236680821' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:23.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:23 vm03 bash[23394]: cluster 2026-03-10T14:57:23.415759+0000 mon.a (mon.0) 1208 : cluster [DBG] osdmap e260: 8 total, 8 up, 8 in 2026-03-10T14:57:23.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:23 vm03 bash[23394]: cluster 2026-03-10T14:57:23.415759+0000 mon.a (mon.0) 1208 : cluster [DBG] osdmap e260: 8 total, 8 up, 8 in 2026-03-10T14:57:24.214 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:57:23 vm00 bash[21005]: ::ffff:192.168.123.103 - - [10/Mar/2026:14:57:23] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:57:24.428 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_operate_aio_write_op PASSED [ 65%] 2026-03-10T14:57:25.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:25 vm00 bash[28403]: cluster 2026-03-10T14:57:24.286302+0000 mgr.y (mgr.24425) 216 : cluster [DBG] pgmap v344: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 512 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:57:25.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:25 vm00 bash[28403]: cluster 2026-03-10T14:57:24.286302+0000 mgr.y (mgr.24425) 216 : cluster [DBG] pgmap v344: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 512 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:57:25.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:25 vm00 bash[28403]: cluster 2026-03-10T14:57:24.425253+0000 mon.a (mon.0) 1209 : cluster [DBG] osdmap e261: 8 total, 8 up, 8 in 2026-03-10T14:57:25.714 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:25 vm00 bash[28403]: cluster 2026-03-10T14:57:24.425253+0000 mon.a (mon.0) 1209 : cluster [DBG] osdmap e261: 8 total, 8 up, 8 in 2026-03-10T14:57:25.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:25 vm00 bash[28403]: audit 2026-03-10T14:57:24.640240+0000 mon.a (mon.0) 1210 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:57:25.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:25 vm00 bash[28403]: audit 2026-03-10T14:57:24.640240+0000 mon.a (mon.0) 1210 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:57:25.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:25 vm00 bash[20726]: cluster 2026-03-10T14:57:24.286302+0000 mgr.y (mgr.24425) 216 : cluster [DBG] pgmap v344: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 512 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:57:25.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:25 vm00 bash[20726]: cluster 2026-03-10T14:57:24.286302+0000 mgr.y (mgr.24425) 216 : cluster [DBG] pgmap v344: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 512 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:57:25.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:25 vm00 bash[20726]: cluster 2026-03-10T14:57:24.425253+0000 mon.a (mon.0) 1209 : cluster [DBG] osdmap e261: 8 total, 8 up, 8 in 2026-03-10T14:57:25.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:25 vm00 bash[20726]: cluster 2026-03-10T14:57:24.425253+0000 mon.a (mon.0) 1209 : cluster [DBG] osdmap e261: 8 total, 8 up, 8 in 2026-03-10T14:57:25.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:25 vm00 bash[20726]: audit 2026-03-10T14:57:24.640240+0000 mon.a (mon.0) 1210 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", 
"format": "json"}]: dispatch 2026-03-10T14:57:25.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:25 vm00 bash[20726]: audit 2026-03-10T14:57:24.640240+0000 mon.a (mon.0) 1210 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:57:25.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:25 vm03 bash[23394]: cluster 2026-03-10T14:57:24.286302+0000 mgr.y (mgr.24425) 216 : cluster [DBG] pgmap v344: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 512 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:57:25.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:25 vm03 bash[23394]: cluster 2026-03-10T14:57:24.286302+0000 mgr.y (mgr.24425) 216 : cluster [DBG] pgmap v344: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 512 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:57:25.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:25 vm03 bash[23394]: cluster 2026-03-10T14:57:24.425253+0000 mon.a (mon.0) 1209 : cluster [DBG] osdmap e261: 8 total, 8 up, 8 in 2026-03-10T14:57:25.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:25 vm03 bash[23394]: cluster 2026-03-10T14:57:24.425253+0000 mon.a (mon.0) 1209 : cluster [DBG] osdmap e261: 8 total, 8 up, 8 in 2026-03-10T14:57:25.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:25 vm03 bash[23394]: audit 2026-03-10T14:57:24.640240+0000 mon.a (mon.0) 1210 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:57:25.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:25 vm03 bash[23394]: audit 2026-03-10T14:57:24.640240+0000 mon.a (mon.0) 1210 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:57:26.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:26 vm03 bash[23394]: cluster 
2026-03-10T14:57:25.417993+0000 mon.a (mon.0) 1211 : cluster [DBG] osdmap e262: 8 total, 8 up, 8 in 2026-03-10T14:57:26.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:26 vm03 bash[23394]: cluster 2026-03-10T14:57:25.417993+0000 mon.a (mon.0) 1211 : cluster [DBG] osdmap e262: 8 total, 8 up, 8 in 2026-03-10T14:57:26.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:26 vm00 bash[28403]: cluster 2026-03-10T14:57:25.417993+0000 mon.a (mon.0) 1211 : cluster [DBG] osdmap e262: 8 total, 8 up, 8 in 2026-03-10T14:57:26.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:26 vm00 bash[28403]: cluster 2026-03-10T14:57:25.417993+0000 mon.a (mon.0) 1211 : cluster [DBG] osdmap e262: 8 total, 8 up, 8 in 2026-03-10T14:57:26.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:26 vm00 bash[20726]: cluster 2026-03-10T14:57:25.417993+0000 mon.a (mon.0) 1211 : cluster [DBG] osdmap e262: 8 total, 8 up, 8 in 2026-03-10T14:57:26.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:26 vm00 bash[20726]: cluster 2026-03-10T14:57:25.417993+0000 mon.a (mon.0) 1211 : cluster [DBG] osdmap e262: 8 total, 8 up, 8 in 2026-03-10T14:57:27.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:27 vm03 bash[23394]: cluster 2026-03-10T14:57:26.286555+0000 mgr.y (mgr.24425) 217 : cluster [DBG] pgmap v347: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 512 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:27.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:27 vm03 bash[23394]: cluster 2026-03-10T14:57:26.286555+0000 mgr.y (mgr.24425) 217 : cluster [DBG] pgmap v347: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 512 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:27.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:27 vm03 bash[23394]: cluster 2026-03-10T14:57:26.472168+0000 mon.a (mon.0) 1212 : cluster [DBG] osdmap e263: 8 total, 8 up, 8 in 2026-03-10T14:57:27.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 
14:57:27 vm03 bash[23394]: cluster 2026-03-10T14:57:26.472168+0000 mon.a (mon.0) 1212 : cluster [DBG] osdmap e263: 8 total, 8 up, 8 in 2026-03-10T14:57:27.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:27 vm03 bash[23394]: audit 2026-03-10T14:57:26.515820+0000 mon.b (mon.1) 44 : audit [INF] from='client.? 192.168.123.100:0/2187602001' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:27.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:27 vm03 bash[23394]: audit 2026-03-10T14:57:26.515820+0000 mon.b (mon.1) 44 : audit [INF] from='client.? 192.168.123.100:0/2187602001' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:27.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:27 vm03 bash[23394]: audit 2026-03-10T14:57:26.520004+0000 mon.a (mon.0) 1213 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:27.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:27 vm03 bash[23394]: audit 2026-03-10T14:57:26.520004+0000 mon.a (mon.0) 1213 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:27.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:27 vm03 bash[23394]: audit 2026-03-10T14:57:27.462082+0000 mon.a (mon.0) 1214 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:27.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:27 vm03 bash[23394]: audit 2026-03-10T14:57:27.462082+0000 mon.a (mon.0) 1214 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:27.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:27 vm03 bash[23394]: cluster 2026-03-10T14:57:27.472644+0000 mon.a (mon.0) 1215 : cluster [DBG] osdmap e264: 8 total, 8 up, 8 in 2026-03-10T14:57:27.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:27 vm03 bash[23394]: cluster 2026-03-10T14:57:27.472644+0000 mon.a (mon.0) 1215 : cluster [DBG] osdmap e264: 8 total, 8 up, 8 in 2026-03-10T14:57:27.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:27 vm00 bash[28403]: cluster 2026-03-10T14:57:26.286555+0000 mgr.y (mgr.24425) 217 : cluster [DBG] pgmap v347: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 512 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:27.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:27 vm00 bash[28403]: cluster 2026-03-10T14:57:26.286555+0000 mgr.y (mgr.24425) 217 : cluster [DBG] pgmap v347: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 512 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:27.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:27 vm00 bash[28403]: cluster 2026-03-10T14:57:26.472168+0000 mon.a (mon.0) 1212 : cluster [DBG] osdmap e263: 8 total, 8 up, 8 in 2026-03-10T14:57:27.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:27 vm00 bash[28403]: cluster 2026-03-10T14:57:26.472168+0000 mon.a (mon.0) 1212 : cluster [DBG] osdmap e263: 8 total, 8 up, 8 in 2026-03-10T14:57:27.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:27 vm00 bash[28403]: audit 2026-03-10T14:57:26.515820+0000 mon.b (mon.1) 44 : audit [INF] from='client.? 192.168.123.100:0/2187602001' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:27.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:27 vm00 bash[28403]: audit 2026-03-10T14:57:26.515820+0000 mon.b (mon.1) 44 : audit [INF] from='client.? 
192.168.123.100:0/2187602001' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:27.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:27 vm00 bash[28403]: audit 2026-03-10T14:57:26.520004+0000 mon.a (mon.0) 1213 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:27.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:27 vm00 bash[28403]: audit 2026-03-10T14:57:26.520004+0000 mon.a (mon.0) 1213 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:27.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:27 vm00 bash[28403]: audit 2026-03-10T14:57:27.462082+0000 mon.a (mon.0) 1214 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:27.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:27 vm00 bash[28403]: audit 2026-03-10T14:57:27.462082+0000 mon.a (mon.0) 1214 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:27.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:27 vm00 bash[28403]: cluster 2026-03-10T14:57:27.472644+0000 mon.a (mon.0) 1215 : cluster [DBG] osdmap e264: 8 total, 8 up, 8 in 2026-03-10T14:57:27.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:27 vm00 bash[28403]: cluster 2026-03-10T14:57:27.472644+0000 mon.a (mon.0) 1215 : cluster [DBG] osdmap e264: 8 total, 8 up, 8 in 2026-03-10T14:57:27.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:27 vm00 bash[20726]: cluster 2026-03-10T14:57:26.286555+0000 mgr.y (mgr.24425) 217 : cluster [DBG] pgmap v347: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 512 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:27.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:27 vm00 bash[20726]: cluster 2026-03-10T14:57:26.286555+0000 mgr.y (mgr.24425) 217 : cluster [DBG] pgmap v347: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 512 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:27.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:27 vm00 bash[20726]: cluster 2026-03-10T14:57:26.472168+0000 mon.a (mon.0) 1212 : cluster [DBG] osdmap e263: 8 total, 8 up, 8 in 2026-03-10T14:57:27.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:27 vm00 bash[20726]: cluster 2026-03-10T14:57:26.472168+0000 mon.a (mon.0) 1212 : cluster [DBG] osdmap e263: 8 total, 8 up, 8 in 2026-03-10T14:57:27.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:27 vm00 bash[20726]: audit 2026-03-10T14:57:26.515820+0000 mon.b (mon.1) 44 : audit [INF] from='client.? 192.168.123.100:0/2187602001' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:27.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:27 vm00 bash[20726]: audit 2026-03-10T14:57:26.515820+0000 mon.b (mon.1) 44 : audit [INF] from='client.? 
192.168.123.100:0/2187602001' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:27.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:27 vm00 bash[20726]: audit 2026-03-10T14:57:26.520004+0000 mon.a (mon.0) 1213 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:27.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:27 vm00 bash[20726]: audit 2026-03-10T14:57:26.520004+0000 mon.a (mon.0) 1213 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:27.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:27 vm00 bash[20726]: audit 2026-03-10T14:57:27.462082+0000 mon.a (mon.0) 1214 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:27.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:27 vm00 bash[20726]: audit 2026-03-10T14:57:27.462082+0000 mon.a (mon.0) 1214 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:27.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:27 vm00 bash[20726]: cluster 2026-03-10T14:57:27.472644+0000 mon.a (mon.0) 1215 : cluster [DBG] osdmap e264: 8 total, 8 up, 8 in 2026-03-10T14:57:27.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:27 vm00 bash[20726]: cluster 2026-03-10T14:57:27.472644+0000 mon.a (mon.0) 1215 : cluster [DBG] osdmap e264: 8 total, 8 up, 8 in 2026-03-10T14:57:28.471 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_aio_write PASSED [ 67%] 2026-03-10T14:57:29.125 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 14:57:28 vm03 bash[48459]: debug there is no tcmu-runner data available 2026-03-10T14:57:29.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:29 vm00 bash[28403]: cluster 2026-03-10T14:57:28.286920+0000 mgr.y (mgr.24425) 218 : cluster [DBG] pgmap v350: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 512 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:29.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:29 vm00 bash[28403]: cluster 2026-03-10T14:57:28.286920+0000 mgr.y (mgr.24425) 218 : cluster [DBG] pgmap v350: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 512 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:29.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:29 vm00 bash[28403]: cluster 2026-03-10T14:57:28.471907+0000 mon.a (mon.0) 1216 : cluster [DBG] osdmap e265: 8 total, 8 up, 8 in 2026-03-10T14:57:29.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:29 vm00 bash[28403]: cluster 2026-03-10T14:57:28.471907+0000 mon.a (mon.0) 1216 : cluster [DBG] osdmap e265: 8 total, 8 up, 8 in 2026-03-10T14:57:29.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:29 vm00 bash[28403]: audit 2026-03-10T14:57:28.726866+0000 mgr.y (mgr.24425) 219 : audit [DBG] 
from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:57:29.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:29 vm00 bash[28403]: audit 2026-03-10T14:57:28.726866+0000 mgr.y (mgr.24425) 219 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:57:29.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:29 vm00 bash[20726]: cluster 2026-03-10T14:57:28.286920+0000 mgr.y (mgr.24425) 218 : cluster [DBG] pgmap v350: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 512 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:29.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:29 vm00 bash[20726]: cluster 2026-03-10T14:57:28.286920+0000 mgr.y (mgr.24425) 218 : cluster [DBG] pgmap v350: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 512 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:29.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:29 vm00 bash[20726]: cluster 2026-03-10T14:57:28.471907+0000 mon.a (mon.0) 1216 : cluster [DBG] osdmap e265: 8 total, 8 up, 8 in 2026-03-10T14:57:29.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:29 vm00 bash[20726]: cluster 2026-03-10T14:57:28.471907+0000 mon.a (mon.0) 1216 : cluster [DBG] osdmap e265: 8 total, 8 up, 8 in 2026-03-10T14:57:29.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:29 vm00 bash[20726]: audit 2026-03-10T14:57:28.726866+0000 mgr.y (mgr.24425) 219 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:57:29.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:29 vm00 bash[20726]: audit 2026-03-10T14:57:28.726866+0000 mgr.y (mgr.24425) 219 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 
2026-03-10T14:57:30.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:29 vm03 bash[23394]: cluster 2026-03-10T14:57:28.286920+0000 mgr.y (mgr.24425) 218 : cluster [DBG] pgmap v350: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 512 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:30.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:29 vm03 bash[23394]: cluster 2026-03-10T14:57:28.286920+0000 mgr.y (mgr.24425) 218 : cluster [DBG] pgmap v350: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 512 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:30.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:29 vm03 bash[23394]: cluster 2026-03-10T14:57:28.471907+0000 mon.a (mon.0) 1216 : cluster [DBG] osdmap e265: 8 total, 8 up, 8 in 2026-03-10T14:57:30.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:29 vm03 bash[23394]: cluster 2026-03-10T14:57:28.471907+0000 mon.a (mon.0) 1216 : cluster [DBG] osdmap e265: 8 total, 8 up, 8 in 2026-03-10T14:57:30.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:29 vm03 bash[23394]: audit 2026-03-10T14:57:28.726866+0000 mgr.y (mgr.24425) 219 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:57:30.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:29 vm03 bash[23394]: audit 2026-03-10T14:57:28.726866+0000 mgr.y (mgr.24425) 219 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:57:31.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:30 vm03 bash[23394]: cluster 2026-03-10T14:57:29.730723+0000 mon.a (mon.0) 1217 : cluster [DBG] osdmap e266: 8 total, 8 up, 8 in 2026-03-10T14:57:31.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:30 vm03 bash[23394]: cluster 2026-03-10T14:57:29.730723+0000 mon.a (mon.0) 1217 : cluster [DBG] osdmap e266: 8 total, 8 up, 8 in 
2026-03-10T14:57:31.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:30 vm00 bash[28403]: cluster 2026-03-10T14:57:29.730723+0000 mon.a (mon.0) 1217 : cluster [DBG] osdmap e266: 8 total, 8 up, 8 in 2026-03-10T14:57:31.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:30 vm00 bash[28403]: cluster 2026-03-10T14:57:29.730723+0000 mon.a (mon.0) 1217 : cluster [DBG] osdmap e266: 8 total, 8 up, 8 in 2026-03-10T14:57:31.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:30 vm00 bash[20726]: cluster 2026-03-10T14:57:29.730723+0000 mon.a (mon.0) 1217 : cluster [DBG] osdmap e266: 8 total, 8 up, 8 in 2026-03-10T14:57:31.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:30 vm00 bash[20726]: cluster 2026-03-10T14:57:29.730723+0000 mon.a (mon.0) 1217 : cluster [DBG] osdmap e266: 8 total, 8 up, 8 in 2026-03-10T14:57:32.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:31 vm03 bash[23394]: cluster 2026-03-10T14:57:30.287356+0000 mgr.y (mgr.24425) 220 : cluster [DBG] pgmap v353: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 513 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:32.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:31 vm03 bash[23394]: cluster 2026-03-10T14:57:30.287356+0000 mgr.y (mgr.24425) 220 : cluster [DBG] pgmap v353: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 513 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:32.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:31 vm03 bash[23394]: cluster 2026-03-10T14:57:30.782387+0000 mon.a (mon.0) 1218 : cluster [DBG] osdmap e267: 8 total, 8 up, 8 in 2026-03-10T14:57:32.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:31 vm03 bash[23394]: cluster 2026-03-10T14:57:30.782387+0000 mon.a (mon.0) 1218 : cluster [DBG] osdmap e267: 8 total, 8 up, 8 in 2026-03-10T14:57:32.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:31 vm03 bash[23394]: audit 2026-03-10T14:57:30.814330+0000 mon.b (mon.1) 45 
: audit [INF] from='client.? 192.168.123.100:0/1856015356' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:32.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:31 vm03 bash[23394]: audit 2026-03-10T14:57:30.814330+0000 mon.b (mon.1) 45 : audit [INF] from='client.? 192.168.123.100:0/1856015356' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:32.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:31 vm03 bash[23394]: audit 2026-03-10T14:57:30.818513+0000 mon.a (mon.0) 1219 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:32.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:31 vm03 bash[23394]: audit 2026-03-10T14:57:30.818513+0000 mon.a (mon.0) 1219 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:32.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:31 vm00 bash[28403]: cluster 2026-03-10T14:57:30.287356+0000 mgr.y (mgr.24425) 220 : cluster [DBG] pgmap v353: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 513 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:32.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:31 vm00 bash[28403]: cluster 2026-03-10T14:57:30.287356+0000 mgr.y (mgr.24425) 220 : cluster [DBG] pgmap v353: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 513 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:32.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:31 vm00 bash[28403]: cluster 2026-03-10T14:57:30.782387+0000 mon.a (mon.0) 1218 : cluster [DBG] osdmap e267: 8 total, 8 up, 8 in 2026-03-10T14:57:32.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:31 vm00 bash[28403]: cluster 2026-03-10T14:57:30.782387+0000 mon.a (mon.0) 1218 : cluster [DBG] osdmap e267: 8 total, 8 up, 8 in 2026-03-10T14:57:32.214 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:31 vm00 bash[28403]: audit 2026-03-10T14:57:30.814330+0000 mon.b (mon.1) 45 : audit [INF] from='client.? 192.168.123.100:0/1856015356' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:32.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:31 vm00 bash[28403]: audit 2026-03-10T14:57:30.814330+0000 mon.b (mon.1) 45 : audit [INF] from='client.? 192.168.123.100:0/1856015356' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:32.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:31 vm00 bash[28403]: audit 2026-03-10T14:57:30.818513+0000 mon.a (mon.0) 1219 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:32.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:31 vm00 bash[28403]: audit 2026-03-10T14:57:30.818513+0000 mon.a (mon.0) 1219 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:32.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:31 vm00 bash[20726]: cluster 2026-03-10T14:57:30.287356+0000 mgr.y (mgr.24425) 220 : cluster [DBG] pgmap v353: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 513 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:32.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:31 vm00 bash[20726]: cluster 2026-03-10T14:57:30.287356+0000 mgr.y (mgr.24425) 220 : cluster [DBG] pgmap v353: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 513 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:32.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:31 vm00 bash[20726]: cluster 2026-03-10T14:57:30.782387+0000 mon.a (mon.0) 1218 : cluster [DBG] osdmap e267: 8 total, 8 up, 8 in 2026-03-10T14:57:32.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:31 vm00 bash[20726]: 
cluster 2026-03-10T14:57:30.782387+0000 mon.a (mon.0) 1218 : cluster [DBG] osdmap e267: 8 total, 8 up, 8 in 2026-03-10T14:57:32.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:31 vm00 bash[20726]: audit 2026-03-10T14:57:30.814330+0000 mon.b (mon.1) 45 : audit [INF] from='client.? 192.168.123.100:0/1856015356' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:32.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:31 vm00 bash[20726]: audit 2026-03-10T14:57:30.814330+0000 mon.b (mon.1) 45 : audit [INF] from='client.? 192.168.123.100:0/1856015356' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:32.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:31 vm00 bash[20726]: audit 2026-03-10T14:57:30.818513+0000 mon.a (mon.0) 1219 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:32.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:31 vm00 bash[20726]: audit 2026-03-10T14:57:30.818513+0000 mon.a (mon.0) 1219 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:32.816 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_aio_cmpext PASSED [ 68%] 2026-03-10T14:57:33.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:32 vm03 bash[23394]: audit 2026-03-10T14:57:31.786385+0000 mon.a (mon.0) 1220 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:33.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:32 vm03 bash[23394]: audit 2026-03-10T14:57:31.786385+0000 mon.a (mon.0) 1220 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:33.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:32 vm03 bash[23394]: cluster 2026-03-10T14:57:31.797197+0000 mon.a (mon.0) 1221 : cluster [DBG] osdmap e268: 8 total, 8 up, 8 in 2026-03-10T14:57:33.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:32 vm03 bash[23394]: cluster 2026-03-10T14:57:31.797197+0000 mon.a (mon.0) 1221 : cluster [DBG] osdmap e268: 8 total, 8 up, 8 in 2026-03-10T14:57:33.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:32 vm00 bash[28403]: audit 2026-03-10T14:57:31.786385+0000 mon.a (mon.0) 1220 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:33.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:32 vm00 bash[28403]: audit 2026-03-10T14:57:31.786385+0000 mon.a (mon.0) 1220 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:33.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:32 vm00 bash[28403]: cluster 2026-03-10T14:57:31.797197+0000 mon.a (mon.0) 1221 : cluster [DBG] osdmap e268: 8 total, 8 up, 8 in 2026-03-10T14:57:33.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:32 vm00 bash[28403]: cluster 2026-03-10T14:57:31.797197+0000 mon.a (mon.0) 1221 : cluster [DBG] osdmap e268: 8 total, 8 up, 8 in 2026-03-10T14:57:33.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:32 vm00 bash[20726]: audit 2026-03-10T14:57:31.786385+0000 mon.a (mon.0) 1220 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:33.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:32 vm00 bash[20726]: audit 2026-03-10T14:57:31.786385+0000 mon.a (mon.0) 1220 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:33.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:32 vm00 bash[20726]: cluster 2026-03-10T14:57:31.797197+0000 mon.a (mon.0) 1221 : cluster [DBG] osdmap e268: 8 total, 8 up, 8 in 2026-03-10T14:57:33.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:32 vm00 bash[20726]: cluster 2026-03-10T14:57:31.797197+0000 mon.a (mon.0) 1221 : cluster [DBG] osdmap e268: 8 total, 8 up, 8 in 2026-03-10T14:57:34.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:33 vm03 bash[23394]: cluster 2026-03-10T14:57:32.287722+0000 mgr.y (mgr.24425) 221 : cluster [DBG] pgmap v356: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 513 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:34.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:33 vm03 bash[23394]: cluster 2026-03-10T14:57:32.287722+0000 mgr.y (mgr.24425) 221 : cluster [DBG] pgmap v356: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 513 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:34.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:33 vm03 bash[23394]: cluster 2026-03-10T14:57:32.817364+0000 mon.a (mon.0) 1222 : cluster [DBG] osdmap e269: 8 total, 8 up, 8 in 2026-03-10T14:57:34.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:33 vm03 bash[23394]: cluster 2026-03-10T14:57:32.817364+0000 mon.a (mon.0) 1222 : cluster [DBG] osdmap e269: 8 total, 8 up, 8 in 2026-03-10T14:57:34.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:33 vm00 bash[28403]: cluster 2026-03-10T14:57:32.287722+0000 mgr.y (mgr.24425) 221 : cluster [DBG] pgmap v356: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 513 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:34.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:33 vm00 bash[28403]: cluster 2026-03-10T14:57:32.287722+0000 mgr.y (mgr.24425) 221 : cluster 
[DBG] pgmap v356: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 513 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:34.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:33 vm00 bash[28403]: cluster 2026-03-10T14:57:32.817364+0000 mon.a (mon.0) 1222 : cluster [DBG] osdmap e269: 8 total, 8 up, 8 in 2026-03-10T14:57:34.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:33 vm00 bash[28403]: cluster 2026-03-10T14:57:32.817364+0000 mon.a (mon.0) 1222 : cluster [DBG] osdmap e269: 8 total, 8 up, 8 in 2026-03-10T14:57:34.214 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:57:33 vm00 bash[21005]: ::ffff:192.168.123.103 - - [10/Mar/2026:14:57:33] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:57:34.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:33 vm00 bash[20726]: cluster 2026-03-10T14:57:32.287722+0000 mgr.y (mgr.24425) 221 : cluster [DBG] pgmap v356: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 513 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:34.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:33 vm00 bash[20726]: cluster 2026-03-10T14:57:32.287722+0000 mgr.y (mgr.24425) 221 : cluster [DBG] pgmap v356: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 513 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:34.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:33 vm00 bash[20726]: cluster 2026-03-10T14:57:32.817364+0000 mon.a (mon.0) 1222 : cluster [DBG] osdmap e269: 8 total, 8 up, 8 in 2026-03-10T14:57:34.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:33 vm00 bash[20726]: cluster 2026-03-10T14:57:32.817364+0000 mon.a (mon.0) 1222 : cluster [DBG] osdmap e269: 8 total, 8 up, 8 in 2026-03-10T14:57:35.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:35 vm03 bash[23394]: cluster 2026-03-10T14:57:33.863744+0000 mon.a (mon.0) 1223 : cluster [DBG] osdmap e270: 8 total, 8 up, 8 in 
2026-03-10T14:57:35.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:35 vm03 bash[23394]: cluster 2026-03-10T14:57:33.863744+0000 mon.a (mon.0) 1223 : cluster [DBG] osdmap e270: 8 total, 8 up, 8 in 2026-03-10T14:57:35.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:35 vm03 bash[23394]: cluster 2026-03-10T14:57:34.288004+0000 mgr.y (mgr.24425) 222 : cluster [DBG] pgmap v359: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 513 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:57:35.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:35 vm03 bash[23394]: cluster 2026-03-10T14:57:34.288004+0000 mgr.y (mgr.24425) 222 : cluster [DBG] pgmap v359: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 513 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:57:35.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:35 vm00 bash[28403]: cluster 2026-03-10T14:57:33.863744+0000 mon.a (mon.0) 1223 : cluster [DBG] osdmap e270: 8 total, 8 up, 8 in 2026-03-10T14:57:35.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:35 vm00 bash[28403]: cluster 2026-03-10T14:57:33.863744+0000 mon.a (mon.0) 1223 : cluster [DBG] osdmap e270: 8 total, 8 up, 8 in 2026-03-10T14:57:35.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:35 vm00 bash[28403]: cluster 2026-03-10T14:57:34.288004+0000 mgr.y (mgr.24425) 222 : cluster [DBG] pgmap v359: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 513 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:57:35.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:35 vm00 bash[28403]: cluster 2026-03-10T14:57:34.288004+0000 mgr.y (mgr.24425) 222 : cluster [DBG] pgmap v359: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 513 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:57:35.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:35 vm00 bash[20726]: cluster 2026-03-10T14:57:33.863744+0000 mon.a (mon.0) 1223 : cluster [DBG] osdmap e270: 8 total, 8 up, 8 in 2026-03-10T14:57:35.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 
10 14:57:35 vm00 bash[20726]: cluster 2026-03-10T14:57:33.863744+0000 mon.a (mon.0) 1223 : cluster [DBG] osdmap e270: 8 total, 8 up, 8 in 2026-03-10T14:57:35.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:35 vm00 bash[20726]: cluster 2026-03-10T14:57:34.288004+0000 mgr.y (mgr.24425) 222 : cluster [DBG] pgmap v359: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 513 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:57:35.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:35 vm00 bash[20726]: cluster 2026-03-10T14:57:34.288004+0000 mgr.y (mgr.24425) 222 : cluster [DBG] pgmap v359: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 513 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:57:36.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:36 vm03 bash[23394]: cluster 2026-03-10T14:57:35.066530+0000 mon.a (mon.0) 1224 : cluster [DBG] osdmap e271: 8 total, 8 up, 8 in 2026-03-10T14:57:36.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:36 vm03 bash[23394]: cluster 2026-03-10T14:57:35.066530+0000 mon.a (mon.0) 1224 : cluster [DBG] osdmap e271: 8 total, 8 up, 8 in 2026-03-10T14:57:36.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:36 vm03 bash[23394]: audit 2026-03-10T14:57:35.097039+0000 mon.b (mon.1) 46 : audit [INF] from='client.? 192.168.123.100:0/4133016090' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:36.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:36 vm03 bash[23394]: audit 2026-03-10T14:57:35.097039+0000 mon.b (mon.1) 46 : audit [INF] from='client.? 192.168.123.100:0/4133016090' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:36.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:36 vm03 bash[23394]: audit 2026-03-10T14:57:35.101219+0000 mon.a (mon.0) 1225 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:36.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:36 vm03 bash[23394]: audit 2026-03-10T14:57:35.101219+0000 mon.a (mon.0) 1225 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:36.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:36 vm00 bash[28403]: cluster 2026-03-10T14:57:35.066530+0000 mon.a (mon.0) 1224 : cluster [DBG] osdmap e271: 8 total, 8 up, 8 in 2026-03-10T14:57:36.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:36 vm00 bash[28403]: cluster 2026-03-10T14:57:35.066530+0000 mon.a (mon.0) 1224 : cluster [DBG] osdmap e271: 8 total, 8 up, 8 in 2026-03-10T14:57:36.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:36 vm00 bash[28403]: audit 2026-03-10T14:57:35.097039+0000 mon.b (mon.1) 46 : audit [INF] from='client.? 192.168.123.100:0/4133016090' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:36.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:36 vm00 bash[28403]: audit 2026-03-10T14:57:35.097039+0000 mon.b (mon.1) 46 : audit [INF] from='client.? 192.168.123.100:0/4133016090' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:36.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:36 vm00 bash[28403]: audit 2026-03-10T14:57:35.101219+0000 mon.a (mon.0) 1225 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:36.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:36 vm00 bash[28403]: audit 2026-03-10T14:57:35.101219+0000 mon.a (mon.0) 1225 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:36.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:36 vm00 bash[20726]: cluster 2026-03-10T14:57:35.066530+0000 mon.a (mon.0) 1224 : cluster [DBG] osdmap e271: 8 total, 8 up, 8 in 2026-03-10T14:57:36.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:36 vm00 bash[20726]: cluster 2026-03-10T14:57:35.066530+0000 mon.a (mon.0) 1224 : cluster [DBG] osdmap e271: 8 total, 8 up, 8 in 2026-03-10T14:57:36.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:36 vm00 bash[20726]: audit 2026-03-10T14:57:35.097039+0000 mon.b (mon.1) 46 : audit [INF] from='client.? 192.168.123.100:0/4133016090' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:36.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:36 vm00 bash[20726]: audit 2026-03-10T14:57:35.097039+0000 mon.b (mon.1) 46 : audit [INF] from='client.? 192.168.123.100:0/4133016090' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:36.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:36 vm00 bash[20726]: audit 2026-03-10T14:57:35.101219+0000 mon.a (mon.0) 1225 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:36.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:36 vm00 bash[20726]: audit 2026-03-10T14:57:35.101219+0000 mon.a (mon.0) 1225 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:37.093 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_aio_rmxattr PASSED [ 69%] 2026-03-10T14:57:37.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:37 vm03 bash[23394]: audit 2026-03-10T14:57:36.066763+0000 mon.a (mon.0) 1226 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:37.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:37 vm03 bash[23394]: audit 2026-03-10T14:57:36.066763+0000 mon.a (mon.0) 1226 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:37.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:37 vm03 bash[23394]: cluster 2026-03-10T14:57:36.076069+0000 mon.a (mon.0) 1227 : cluster [DBG] osdmap e272: 8 total, 8 up, 8 in 2026-03-10T14:57:37.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:37 vm03 bash[23394]: cluster 2026-03-10T14:57:36.076069+0000 mon.a (mon.0) 1227 : cluster [DBG] osdmap e272: 8 total, 8 up, 8 in 2026-03-10T14:57:37.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:37 vm03 bash[23394]: cluster 2026-03-10T14:57:36.288317+0000 mgr.y (mgr.24425) 223 : cluster [DBG] pgmap v362: 196 pgs: 196 active+clean; 455 KiB data, 513 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T14:57:37.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:37 vm03 bash[23394]: cluster 2026-03-10T14:57:36.288317+0000 mgr.y (mgr.24425) 223 : cluster [DBG] pgmap v362: 196 pgs: 196 active+clean; 455 KiB data, 513 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T14:57:37.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:37 vm00 bash[28403]: audit 2026-03-10T14:57:36.066763+0000 mon.a (mon.0) 1226 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:37.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:37 vm00 bash[28403]: audit 2026-03-10T14:57:36.066763+0000 mon.a (mon.0) 1226 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:37.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:37 vm00 bash[28403]: cluster 2026-03-10T14:57:36.076069+0000 mon.a (mon.0) 1227 : cluster [DBG] osdmap e272: 8 total, 8 up, 8 in 2026-03-10T14:57:37.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:37 vm00 bash[28403]: cluster 2026-03-10T14:57:36.076069+0000 mon.a (mon.0) 1227 : cluster [DBG] osdmap e272: 8 total, 8 up, 8 in 2026-03-10T14:57:37.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:37 vm00 bash[28403]: cluster 2026-03-10T14:57:36.288317+0000 mgr.y (mgr.24425) 223 : cluster [DBG] pgmap v362: 196 pgs: 196 active+clean; 455 KiB data, 513 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T14:57:37.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:37 vm00 bash[28403]: cluster 2026-03-10T14:57:36.288317+0000 mgr.y (mgr.24425) 223 : cluster [DBG] pgmap v362: 196 pgs: 196 active+clean; 455 KiB data, 513 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T14:57:37.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:37 vm00 bash[20726]: audit 2026-03-10T14:57:36.066763+0000 mon.a (mon.0) 1226 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:37.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:37 vm00 bash[20726]: audit 2026-03-10T14:57:36.066763+0000 mon.a (mon.0) 1226 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:37.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:37 vm00 bash[20726]: cluster 2026-03-10T14:57:36.076069+0000 mon.a (mon.0) 1227 : cluster [DBG] osdmap e272: 8 total, 8 up, 8 in 2026-03-10T14:57:37.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:37 vm00 bash[20726]: cluster 2026-03-10T14:57:36.076069+0000 mon.a (mon.0) 1227 : cluster [DBG] osdmap e272: 8 total, 8 up, 8 in 2026-03-10T14:57:37.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:37 vm00 bash[20726]: cluster 2026-03-10T14:57:36.288317+0000 mgr.y (mgr.24425) 223 : cluster [DBG] pgmap v362: 196 pgs: 196 active+clean; 455 KiB data, 513 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T14:57:37.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:37 vm00 bash[20726]: cluster 2026-03-10T14:57:36.288317+0000 mgr.y (mgr.24425) 223 : cluster [DBG] pgmap v362: 196 pgs: 196 active+clean; 455 KiB data, 513 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T14:57:38.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:38 vm00 bash[28403]: cluster 2026-03-10T14:57:37.092892+0000 mon.a (mon.0) 1228 : cluster [DBG] osdmap e273: 8 total, 8 up, 8 in 2026-03-10T14:57:38.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:38 vm00 bash[28403]: cluster 2026-03-10T14:57:37.092892+0000 mon.a (mon.0) 1228 : cluster [DBG] osdmap e273: 8 total, 8 up, 8 in 2026-03-10T14:57:38.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:38 vm00 bash[20726]: cluster 2026-03-10T14:57:37.092892+0000 mon.a (mon.0) 1228 : cluster [DBG] osdmap e273: 8 total, 8 up, 8 in 2026-03-10T14:57:38.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:38 vm00 bash[20726]: cluster 2026-03-10T14:57:37.092892+0000 mon.a (mon.0) 1228 : cluster [DBG] osdmap e273: 8 total, 8 up, 8 in 2026-03-10T14:57:38.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:38 vm03 
bash[23394]: cluster 2026-03-10T14:57:37.092892+0000 mon.a (mon.0) 1228 : cluster [DBG] osdmap e273: 8 total, 8 up, 8 in 2026-03-10T14:57:38.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:38 vm03 bash[23394]: cluster 2026-03-10T14:57:37.092892+0000 mon.a (mon.0) 1228 : cluster [DBG] osdmap e273: 8 total, 8 up, 8 in 2026-03-10T14:57:39.125 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 14:57:38 vm03 bash[48459]: debug there is no tcmu-runner data available 2026-03-10T14:57:39.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:39 vm00 bash[28403]: cluster 2026-03-10T14:57:38.139579+0000 mon.a (mon.0) 1229 : cluster [DBG] osdmap e274: 8 total, 8 up, 8 in 2026-03-10T14:57:39.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:39 vm00 bash[28403]: cluster 2026-03-10T14:57:38.139579+0000 mon.a (mon.0) 1229 : cluster [DBG] osdmap e274: 8 total, 8 up, 8 in 2026-03-10T14:57:39.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:39 vm00 bash[28403]: cluster 2026-03-10T14:57:38.288689+0000 mgr.y (mgr.24425) 224 : cluster [DBG] pgmap v365: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 513 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:39.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:39 vm00 bash[28403]: cluster 2026-03-10T14:57:38.288689+0000 mgr.y (mgr.24425) 224 : cluster [DBG] pgmap v365: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 513 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:39.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:39 vm00 bash[28403]: audit 2026-03-10T14:57:38.734358+0000 mgr.y (mgr.24425) 225 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:57:39.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:39 vm00 bash[28403]: audit 2026-03-10T14:57:38.734358+0000 mgr.y (mgr.24425) 225 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' 
cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:57:39.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:39 vm00 bash[20726]: cluster 2026-03-10T14:57:38.139579+0000 mon.a (mon.0) 1229 : cluster [DBG] osdmap e274: 8 total, 8 up, 8 in 2026-03-10T14:57:39.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:39 vm00 bash[20726]: cluster 2026-03-10T14:57:38.139579+0000 mon.a (mon.0) 1229 : cluster [DBG] osdmap e274: 8 total, 8 up, 8 in 2026-03-10T14:57:39.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:39 vm00 bash[20726]: cluster 2026-03-10T14:57:38.288689+0000 mgr.y (mgr.24425) 224 : cluster [DBG] pgmap v365: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 513 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:39.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:39 vm00 bash[20726]: cluster 2026-03-10T14:57:38.288689+0000 mgr.y (mgr.24425) 224 : cluster [DBG] pgmap v365: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 513 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:39.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:39 vm00 bash[20726]: audit 2026-03-10T14:57:38.734358+0000 mgr.y (mgr.24425) 225 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:57:39.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:39 vm00 bash[20726]: audit 2026-03-10T14:57:38.734358+0000 mgr.y (mgr.24425) 225 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:57:39.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:39 vm03 bash[23394]: cluster 2026-03-10T14:57:38.139579+0000 mon.a (mon.0) 1229 : cluster [DBG] osdmap e274: 8 total, 8 up, 8 in 2026-03-10T14:57:39.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:39 vm03 bash[23394]: cluster 2026-03-10T14:57:38.139579+0000 mon.a 
(mon.0) 1229 : cluster [DBG] osdmap e274: 8 total, 8 up, 8 in 2026-03-10T14:57:39.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:39 vm03 bash[23394]: cluster 2026-03-10T14:57:38.288689+0000 mgr.y (mgr.24425) 224 : cluster [DBG] pgmap v365: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 513 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:39.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:39 vm03 bash[23394]: cluster 2026-03-10T14:57:38.288689+0000 mgr.y (mgr.24425) 224 : cluster [DBG] pgmap v365: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 513 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:39.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:39 vm03 bash[23394]: audit 2026-03-10T14:57:38.734358+0000 mgr.y (mgr.24425) 225 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:57:39.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:39 vm03 bash[23394]: audit 2026-03-10T14:57:38.734358+0000 mgr.y (mgr.24425) 225 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:57:40.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:40 vm03 bash[23394]: cluster 2026-03-10T14:57:39.205632+0000 mon.a (mon.0) 1230 : cluster [DBG] osdmap e275: 8 total, 8 up, 8 in 2026-03-10T14:57:40.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:40 vm03 bash[23394]: cluster 2026-03-10T14:57:39.205632+0000 mon.a (mon.0) 1230 : cluster [DBG] osdmap e275: 8 total, 8 up, 8 in 2026-03-10T14:57:40.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:40 vm03 bash[23394]: audit 2026-03-10T14:57:39.253525+0000 mon.a (mon.0) 1231 : audit [INF] from='client.? 
192.168.123.100:0/1053185832' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:40.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:40 vm03 bash[23394]: audit 2026-03-10T14:57:39.253525+0000 mon.a (mon.0) 1231 : audit [INF] from='client.? 192.168.123.100:0/1053185832' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:40.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:40 vm03 bash[23394]: audit 2026-03-10T14:57:39.646767+0000 mon.a (mon.0) 1232 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:57:40.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:40 vm03 bash[23394]: audit 2026-03-10T14:57:39.646767+0000 mon.a (mon.0) 1232 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:57:40.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:40 vm00 bash[28403]: cluster 2026-03-10T14:57:39.205632+0000 mon.a (mon.0) 1230 : cluster [DBG] osdmap e275: 8 total, 8 up, 8 in 2026-03-10T14:57:40.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:40 vm00 bash[28403]: cluster 2026-03-10T14:57:39.205632+0000 mon.a (mon.0) 1230 : cluster [DBG] osdmap e275: 8 total, 8 up, 8 in 2026-03-10T14:57:40.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:40 vm00 bash[28403]: audit 2026-03-10T14:57:39.253525+0000 mon.a (mon.0) 1231 : audit [INF] from='client.? 192.168.123.100:0/1053185832' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:40.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:40 vm00 bash[28403]: audit 2026-03-10T14:57:39.253525+0000 mon.a (mon.0) 1231 : audit [INF] from='client.? 
192.168.123.100:0/1053185832' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:40.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:40 vm00 bash[28403]: audit 2026-03-10T14:57:39.646767+0000 mon.a (mon.0) 1232 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:57:40.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:40 vm00 bash[28403]: audit 2026-03-10T14:57:39.646767+0000 mon.a (mon.0) 1232 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:57:40.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:40 vm00 bash[20726]: cluster 2026-03-10T14:57:39.205632+0000 mon.a (mon.0) 1230 : cluster [DBG] osdmap e275: 8 total, 8 up, 8 in 2026-03-10T14:57:40.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:40 vm00 bash[20726]: cluster 2026-03-10T14:57:39.205632+0000 mon.a (mon.0) 1230 : cluster [DBG] osdmap e275: 8 total, 8 up, 8 in 2026-03-10T14:57:40.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:40 vm00 bash[20726]: audit 2026-03-10T14:57:39.253525+0000 mon.a (mon.0) 1231 : audit [INF] from='client.? 192.168.123.100:0/1053185832' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:40.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:40 vm00 bash[20726]: audit 2026-03-10T14:57:39.253525+0000 mon.a (mon.0) 1231 : audit [INF] from='client.? 
192.168.123.100:0/1053185832' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:40.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:40 vm00 bash[20726]: audit 2026-03-10T14:57:39.646767+0000 mon.a (mon.0) 1232 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:57:40.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:40 vm00 bash[20726]: audit 2026-03-10T14:57:39.646767+0000 mon.a (mon.0) 1232 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:57:41.238 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_aio_write_no_comp_ref PASSED [ 70%] 2026-03-10T14:57:41.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:41 vm03 bash[23394]: audit 2026-03-10T14:57:40.226058+0000 mon.a (mon.0) 1233 : audit [INF] from='client.? 192.168.123.100:0/1053185832' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:41.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:41 vm03 bash[23394]: audit 2026-03-10T14:57:40.226058+0000 mon.a (mon.0) 1233 : audit [INF] from='client.? 
192.168.123.100:0/1053185832' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:41.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:41 vm03 bash[23394]: cluster 2026-03-10T14:57:40.228342+0000 mon.a (mon.0) 1234 : cluster [DBG] osdmap e276: 8 total, 8 up, 8 in 2026-03-10T14:57:41.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:41 vm03 bash[23394]: cluster 2026-03-10T14:57:40.228342+0000 mon.a (mon.0) 1234 : cluster [DBG] osdmap e276: 8 total, 8 up, 8 in 2026-03-10T14:57:41.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:41 vm03 bash[23394]: cluster 2026-03-10T14:57:40.289261+0000 mgr.y (mgr.24425) 226 : cluster [DBG] pgmap v368: 196 pgs: 196 active+clean; 455 KiB data, 514 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:41.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:41 vm03 bash[23394]: cluster 2026-03-10T14:57:40.289261+0000 mgr.y (mgr.24425) 226 : cluster [DBG] pgmap v368: 196 pgs: 196 active+clean; 455 KiB data, 514 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:41.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:41 vm00 bash[28403]: audit 2026-03-10T14:57:40.226058+0000 mon.a (mon.0) 1233 : audit [INF] from='client.? 192.168.123.100:0/1053185832' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:41.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:41 vm00 bash[28403]: audit 2026-03-10T14:57:40.226058+0000 mon.a (mon.0) 1233 : audit [INF] from='client.? 
192.168.123.100:0/1053185832' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:41.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:41 vm00 bash[28403]: cluster 2026-03-10T14:57:40.228342+0000 mon.a (mon.0) 1234 : cluster [DBG] osdmap e276: 8 total, 8 up, 8 in 2026-03-10T14:57:41.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:41 vm00 bash[28403]: cluster 2026-03-10T14:57:40.228342+0000 mon.a (mon.0) 1234 : cluster [DBG] osdmap e276: 8 total, 8 up, 8 in 2026-03-10T14:57:41.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:41 vm00 bash[28403]: cluster 2026-03-10T14:57:40.289261+0000 mgr.y (mgr.24425) 226 : cluster [DBG] pgmap v368: 196 pgs: 196 active+clean; 455 KiB data, 514 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:41.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:41 vm00 bash[28403]: cluster 2026-03-10T14:57:40.289261+0000 mgr.y (mgr.24425) 226 : cluster [DBG] pgmap v368: 196 pgs: 196 active+clean; 455 KiB data, 514 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:41.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:41 vm00 bash[20726]: audit 2026-03-10T14:57:40.226058+0000 mon.a (mon.0) 1233 : audit [INF] from='client.? 192.168.123.100:0/1053185832' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:41.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:41 vm00 bash[20726]: audit 2026-03-10T14:57:40.226058+0000 mon.a (mon.0) 1233 : audit [INF] from='client.? 
192.168.123.100:0/1053185832' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:41.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:41 vm00 bash[20726]: cluster 2026-03-10T14:57:40.228342+0000 mon.a (mon.0) 1234 : cluster [DBG] osdmap e276: 8 total, 8 up, 8 in 2026-03-10T14:57:41.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:41 vm00 bash[20726]: cluster 2026-03-10T14:57:40.228342+0000 mon.a (mon.0) 1234 : cluster [DBG] osdmap e276: 8 total, 8 up, 8 in 2026-03-10T14:57:41.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:41 vm00 bash[20726]: cluster 2026-03-10T14:57:40.289261+0000 mgr.y (mgr.24425) 226 : cluster [DBG] pgmap v368: 196 pgs: 196 active+clean; 455 KiB data, 514 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:41.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:41 vm00 bash[20726]: cluster 2026-03-10T14:57:40.289261+0000 mgr.y (mgr.24425) 226 : cluster [DBG] pgmap v368: 196 pgs: 196 active+clean; 455 KiB data, 514 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:42.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:42 vm03 bash[23394]: cluster 2026-03-10T14:57:41.239670+0000 mon.a (mon.0) 1235 : cluster [DBG] osdmap e277: 8 total, 8 up, 8 in 2026-03-10T14:57:42.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:42 vm03 bash[23394]: cluster 2026-03-10T14:57:41.239670+0000 mon.a (mon.0) 1235 : cluster [DBG] osdmap e277: 8 total, 8 up, 8 in 2026-03-10T14:57:42.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:42 vm00 bash[28403]: cluster 2026-03-10T14:57:41.239670+0000 mon.a (mon.0) 1235 : cluster [DBG] osdmap e277: 8 total, 8 up, 8 in 2026-03-10T14:57:42.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:42 vm00 bash[28403]: cluster 2026-03-10T14:57:41.239670+0000 mon.a (mon.0) 1235 : cluster [DBG] osdmap e277: 8 total, 8 up, 8 in 2026-03-10T14:57:42.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:42 vm00 
bash[20726]: cluster 2026-03-10T14:57:41.239670+0000 mon.a (mon.0) 1235 : cluster [DBG] osdmap e277: 8 total, 8 up, 8 in 2026-03-10T14:57:42.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:42 vm00 bash[20726]: cluster 2026-03-10T14:57:41.239670+0000 mon.a (mon.0) 1235 : cluster [DBG] osdmap e277: 8 total, 8 up, 8 in 2026-03-10T14:57:43.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:43 vm03 bash[23394]: cluster 2026-03-10T14:57:42.275426+0000 mon.a (mon.0) 1236 : cluster [DBG] osdmap e278: 8 total, 8 up, 8 in 2026-03-10T14:57:43.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:43 vm03 bash[23394]: cluster 2026-03-10T14:57:42.275426+0000 mon.a (mon.0) 1236 : cluster [DBG] osdmap e278: 8 total, 8 up, 8 in 2026-03-10T14:57:43.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:43 vm03 bash[23394]: cluster 2026-03-10T14:57:42.289900+0000 mgr.y (mgr.24425) 227 : cluster [DBG] pgmap v371: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 514 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:43.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:43 vm03 bash[23394]: cluster 2026-03-10T14:57:42.289900+0000 mgr.y (mgr.24425) 227 : cluster [DBG] pgmap v371: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 514 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:43.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:43 vm03 bash[23394]: cluster 2026-03-10T14:57:43.271019+0000 mon.a (mon.0) 1237 : cluster [DBG] osdmap e279: 8 total, 8 up, 8 in 2026-03-10T14:57:43.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:43 vm03 bash[23394]: cluster 2026-03-10T14:57:43.271019+0000 mon.a (mon.0) 1237 : cluster [DBG] osdmap e279: 8 total, 8 up, 8 in 2026-03-10T14:57:43.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:43 vm00 bash[28403]: cluster 2026-03-10T14:57:42.275426+0000 mon.a (mon.0) 1236 : cluster [DBG] osdmap e278: 8 total, 8 up, 8 in 2026-03-10T14:57:43.714 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:43 vm00 bash[28403]: cluster 2026-03-10T14:57:42.275426+0000 mon.a (mon.0) 1236 : cluster [DBG] osdmap e278: 8 total, 8 up, 8 in 2026-03-10T14:57:43.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:43 vm00 bash[28403]: cluster 2026-03-10T14:57:42.289900+0000 mgr.y (mgr.24425) 227 : cluster [DBG] pgmap v371: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 514 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:43.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:43 vm00 bash[28403]: cluster 2026-03-10T14:57:42.289900+0000 mgr.y (mgr.24425) 227 : cluster [DBG] pgmap v371: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 514 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:43.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:43 vm00 bash[28403]: cluster 2026-03-10T14:57:43.271019+0000 mon.a (mon.0) 1237 : cluster [DBG] osdmap e279: 8 total, 8 up, 8 in 2026-03-10T14:57:43.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:43 vm00 bash[28403]: cluster 2026-03-10T14:57:43.271019+0000 mon.a (mon.0) 1237 : cluster [DBG] osdmap e279: 8 total, 8 up, 8 in 2026-03-10T14:57:43.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:43 vm00 bash[20726]: cluster 2026-03-10T14:57:42.275426+0000 mon.a (mon.0) 1236 : cluster [DBG] osdmap e278: 8 total, 8 up, 8 in 2026-03-10T14:57:43.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:43 vm00 bash[20726]: cluster 2026-03-10T14:57:42.275426+0000 mon.a (mon.0) 1236 : cluster [DBG] osdmap e278: 8 total, 8 up, 8 in 2026-03-10T14:57:43.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:43 vm00 bash[20726]: cluster 2026-03-10T14:57:42.289900+0000 mgr.y (mgr.24425) 227 : cluster [DBG] pgmap v371: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 514 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:43.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:43 vm00 bash[20726]: 
cluster 2026-03-10T14:57:42.289900+0000 mgr.y (mgr.24425) 227 : cluster [DBG] pgmap v371: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 514 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:43.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:43 vm00 bash[20726]: cluster 2026-03-10T14:57:43.271019+0000 mon.a (mon.0) 1237 : cluster [DBG] osdmap e279: 8 total, 8 up, 8 in 2026-03-10T14:57:43.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:43 vm00 bash[20726]: cluster 2026-03-10T14:57:43.271019+0000 mon.a (mon.0) 1237 : cluster [DBG] osdmap e279: 8 total, 8 up, 8 in 2026-03-10T14:57:44.214 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:57:43 vm00 bash[21005]: ::ffff:192.168.123.103 - - [10/Mar/2026:14:57:43] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:57:44.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:44 vm03 bash[23394]: audit 2026-03-10T14:57:43.321511+0000 mon.b (mon.1) 47 : audit [INF] from='client.? 192.168.123.100:0/3453226021' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:44.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:44 vm03 bash[23394]: audit 2026-03-10T14:57:43.321511+0000 mon.b (mon.1) 47 : audit [INF] from='client.? 192.168.123.100:0/3453226021' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:44.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:44 vm03 bash[23394]: audit 2026-03-10T14:57:43.325766+0000 mon.a (mon.0) 1238 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:44.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:44 vm03 bash[23394]: audit 2026-03-10T14:57:43.325766+0000 mon.a (mon.0) 1238 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:44.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:44 vm03 bash[23394]: audit 2026-03-10T14:57:44.262975+0000 mon.a (mon.0) 1239 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:44.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:44 vm03 bash[23394]: audit 2026-03-10T14:57:44.262975+0000 mon.a (mon.0) 1239 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:44.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:44 vm03 bash[23394]: cluster 2026-03-10T14:57:44.266891+0000 mon.a (mon.0) 1240 : cluster [DBG] osdmap e280: 8 total, 8 up, 8 in 2026-03-10T14:57:44.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:44 vm03 bash[23394]: cluster 2026-03-10T14:57:44.266891+0000 mon.a (mon.0) 1240 : cluster [DBG] osdmap e280: 8 total, 8 up, 8 in 2026-03-10T14:57:44.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:44 vm00 bash[28403]: audit 2026-03-10T14:57:43.321511+0000 mon.b (mon.1) 47 : audit [INF] from='client.? 192.168.123.100:0/3453226021' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:44.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:44 vm00 bash[28403]: audit 2026-03-10T14:57:43.321511+0000 mon.b (mon.1) 47 : audit [INF] from='client.? 192.168.123.100:0/3453226021' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:44.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:44 vm00 bash[28403]: audit 2026-03-10T14:57:43.325766+0000 mon.a (mon.0) 1238 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:44.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:44 vm00 bash[28403]: audit 2026-03-10T14:57:43.325766+0000 mon.a (mon.0) 1238 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:44.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:44 vm00 bash[28403]: audit 2026-03-10T14:57:44.262975+0000 mon.a (mon.0) 1239 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:44.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:44 vm00 bash[28403]: audit 2026-03-10T14:57:44.262975+0000 mon.a (mon.0) 1239 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:44.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:44 vm00 bash[28403]: cluster 2026-03-10T14:57:44.266891+0000 mon.a (mon.0) 1240 : cluster [DBG] osdmap e280: 8 total, 8 up, 8 in 2026-03-10T14:57:44.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:44 vm00 bash[28403]: cluster 2026-03-10T14:57:44.266891+0000 mon.a (mon.0) 1240 : cluster [DBG] osdmap e280: 8 total, 8 up, 8 in 2026-03-10T14:57:44.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:44 vm00 bash[20726]: audit 2026-03-10T14:57:43.321511+0000 mon.b (mon.1) 47 : audit [INF] from='client.? 192.168.123.100:0/3453226021' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:44.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:44 vm00 bash[20726]: audit 2026-03-10T14:57:43.321511+0000 mon.b (mon.1) 47 : audit [INF] from='client.? 
192.168.123.100:0/3453226021' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:44.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:44 vm00 bash[20726]: audit 2026-03-10T14:57:43.325766+0000 mon.a (mon.0) 1238 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:44.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:44 vm00 bash[20726]: audit 2026-03-10T14:57:43.325766+0000 mon.a (mon.0) 1238 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:44.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:44 vm00 bash[20726]: audit 2026-03-10T14:57:44.262975+0000 mon.a (mon.0) 1239 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:44.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:44 vm00 bash[20726]: audit 2026-03-10T14:57:44.262975+0000 mon.a (mon.0) 1239 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:44.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:44 vm00 bash[20726]: cluster 2026-03-10T14:57:44.266891+0000 mon.a (mon.0) 1240 : cluster [DBG] osdmap e280: 8 total, 8 up, 8 in 2026-03-10T14:57:44.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:44 vm00 bash[20726]: cluster 2026-03-10T14:57:44.266891+0000 mon.a (mon.0) 1240 : cluster [DBG] osdmap e280: 8 total, 8 up, 8 in 2026-03-10T14:57:45.280 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_aio_append PASSED [ 71%] 2026-03-10T14:57:45.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:45 vm03 bash[23394]: cluster 2026-03-10T14:57:44.290162+0000 mgr.y (mgr.24425) 228 : cluster [DBG] pgmap v374: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 514 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:57:45.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:45 vm03 bash[23394]: cluster 2026-03-10T14:57:44.290162+0000 mgr.y (mgr.24425) 228 : cluster [DBG] pgmap v374: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 514 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:57:45.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:45 vm03 bash[23394]: cluster 2026-03-10T14:57:45.281914+0000 mon.a (mon.0) 1241 : cluster [DBG] osdmap e281: 8 total, 8 up, 8 in 2026-03-10T14:57:45.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:45 vm03 bash[23394]: cluster 2026-03-10T14:57:45.281914+0000 mon.a (mon.0) 1241 : cluster [DBG] osdmap e281: 8 total, 8 up, 8 in 2026-03-10T14:57:45.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:45 vm00 bash[28403]: cluster 2026-03-10T14:57:44.290162+0000 mgr.y (mgr.24425) 228 : cluster [DBG] pgmap v374: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 514 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:57:45.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:45 vm00 bash[28403]: 
cluster 2026-03-10T14:57:44.290162+0000 mgr.y (mgr.24425) 228 : cluster [DBG] pgmap v374: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 514 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:57:45.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:45 vm00 bash[28403]: cluster 2026-03-10T14:57:45.281914+0000 mon.a (mon.0) 1241 : cluster [DBG] osdmap e281: 8 total, 8 up, 8 in 2026-03-10T14:57:45.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:45 vm00 bash[28403]: cluster 2026-03-10T14:57:45.281914+0000 mon.a (mon.0) 1241 : cluster [DBG] osdmap e281: 8 total, 8 up, 8 in 2026-03-10T14:57:45.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:45 vm00 bash[20726]: cluster 2026-03-10T14:57:44.290162+0000 mgr.y (mgr.24425) 228 : cluster [DBG] pgmap v374: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 514 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:57:45.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:45 vm00 bash[20726]: cluster 2026-03-10T14:57:44.290162+0000 mgr.y (mgr.24425) 228 : cluster [DBG] pgmap v374: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 514 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:57:45.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:45 vm00 bash[20726]: cluster 2026-03-10T14:57:45.281914+0000 mon.a (mon.0) 1241 : cluster [DBG] osdmap e281: 8 total, 8 up, 8 in 2026-03-10T14:57:45.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:45 vm00 bash[20726]: cluster 2026-03-10T14:57:45.281914+0000 mon.a (mon.0) 1241 : cluster [DBG] osdmap e281: 8 total, 8 up, 8 in 2026-03-10T14:57:47.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:47 vm03 bash[23394]: cluster 2026-03-10T14:57:46.290456+0000 mgr.y (mgr.24425) 229 : cluster [DBG] pgmap v377: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 514 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:47.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:47 vm03 bash[23394]: cluster 2026-03-10T14:57:46.290456+0000 mgr.y 
(mgr.24425) 229 : cluster [DBG] pgmap v377: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 514 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:47.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:47 vm03 bash[23394]: cluster 2026-03-10T14:57:46.294487+0000 mon.a (mon.0) 1242 : cluster [DBG] osdmap e282: 8 total, 8 up, 8 in 2026-03-10T14:57:47.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:47 vm03 bash[23394]: cluster 2026-03-10T14:57:46.294487+0000 mon.a (mon.0) 1242 : cluster [DBG] osdmap e282: 8 total, 8 up, 8 in 2026-03-10T14:57:47.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:47 vm00 bash[28403]: cluster 2026-03-10T14:57:46.290456+0000 mgr.y (mgr.24425) 229 : cluster [DBG] pgmap v377: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 514 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:47.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:47 vm00 bash[28403]: cluster 2026-03-10T14:57:46.290456+0000 mgr.y (mgr.24425) 229 : cluster [DBG] pgmap v377: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 514 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:47.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:47 vm00 bash[28403]: cluster 2026-03-10T14:57:46.294487+0000 mon.a (mon.0) 1242 : cluster [DBG] osdmap e282: 8 total, 8 up, 8 in 2026-03-10T14:57:47.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:47 vm00 bash[28403]: cluster 2026-03-10T14:57:46.294487+0000 mon.a (mon.0) 1242 : cluster [DBG] osdmap e282: 8 total, 8 up, 8 in 2026-03-10T14:57:47.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:47 vm00 bash[20726]: cluster 2026-03-10T14:57:46.290456+0000 mgr.y (mgr.24425) 229 : cluster [DBG] pgmap v377: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 514 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:47.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:47 vm00 bash[20726]: cluster 
2026-03-10T14:57:46.290456+0000 mgr.y (mgr.24425) 229 : cluster [DBG] pgmap v377: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 514 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:47.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:47 vm00 bash[20726]: cluster 2026-03-10T14:57:46.294487+0000 mon.a (mon.0) 1242 : cluster [DBG] osdmap e282: 8 total, 8 up, 8 in 2026-03-10T14:57:47.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:47 vm00 bash[20726]: cluster 2026-03-10T14:57:46.294487+0000 mon.a (mon.0) 1242 : cluster [DBG] osdmap e282: 8 total, 8 up, 8 in 2026-03-10T14:57:48.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:48 vm03 bash[23394]: cluster 2026-03-10T14:57:47.300825+0000 mon.a (mon.0) 1243 : cluster [DBG] osdmap e283: 8 total, 8 up, 8 in 2026-03-10T14:57:48.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:48 vm03 bash[23394]: cluster 2026-03-10T14:57:47.300825+0000 mon.a (mon.0) 1243 : cluster [DBG] osdmap e283: 8 total, 8 up, 8 in 2026-03-10T14:57:48.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:48 vm03 bash[23394]: audit 2026-03-10T14:57:47.341798+0000 mon.b (mon.1) 48 : audit [INF] from='client.? 192.168.123.100:0/969361063' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:48.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:48 vm03 bash[23394]: audit 2026-03-10T14:57:47.341798+0000 mon.b (mon.1) 48 : audit [INF] from='client.? 192.168.123.100:0/969361063' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:48.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:48 vm03 bash[23394]: audit 2026-03-10T14:57:47.345849+0000 mon.a (mon.0) 1244 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:48.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:48 vm03 bash[23394]: audit 2026-03-10T14:57:47.345849+0000 mon.a (mon.0) 1244 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:48.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:48 vm00 bash[28403]: cluster 2026-03-10T14:57:47.300825+0000 mon.a (mon.0) 1243 : cluster [DBG] osdmap e283: 8 total, 8 up, 8 in 2026-03-10T14:57:48.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:48 vm00 bash[28403]: cluster 2026-03-10T14:57:47.300825+0000 mon.a (mon.0) 1243 : cluster [DBG] osdmap e283: 8 total, 8 up, 8 in 2026-03-10T14:57:48.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:48 vm00 bash[28403]: audit 2026-03-10T14:57:47.341798+0000 mon.b (mon.1) 48 : audit [INF] from='client.? 192.168.123.100:0/969361063' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:48.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:48 vm00 bash[28403]: audit 2026-03-10T14:57:47.341798+0000 mon.b (mon.1) 48 : audit [INF] from='client.? 192.168.123.100:0/969361063' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:48.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:48 vm00 bash[28403]: audit 2026-03-10T14:57:47.345849+0000 mon.a (mon.0) 1244 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:48.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:48 vm00 bash[28403]: audit 2026-03-10T14:57:47.345849+0000 mon.a (mon.0) 1244 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:48.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:48 vm00 bash[20726]: cluster 2026-03-10T14:57:47.300825+0000 mon.a (mon.0) 1243 : cluster [DBG] osdmap e283: 8 total, 8 up, 8 in 2026-03-10T14:57:48.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:48 vm00 bash[20726]: cluster 2026-03-10T14:57:47.300825+0000 mon.a (mon.0) 1243 : cluster [DBG] osdmap e283: 8 total, 8 up, 8 in 2026-03-10T14:57:48.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:48 vm00 bash[20726]: audit 2026-03-10T14:57:47.341798+0000 mon.b (mon.1) 48 : audit [INF] from='client.? 192.168.123.100:0/969361063' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:48.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:48 vm00 bash[20726]: audit 2026-03-10T14:57:47.341798+0000 mon.b (mon.1) 48 : audit [INF] from='client.? 192.168.123.100:0/969361063' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:48.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:48 vm00 bash[20726]: audit 2026-03-10T14:57:47.345849+0000 mon.a (mon.0) 1244 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:48.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:48 vm00 bash[20726]: audit 2026-03-10T14:57:47.345849+0000 mon.a (mon.0) 1244 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:49.125 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 14:57:48 vm03 bash[48459]: debug there is no tcmu-runner data available 2026-03-10T14:57:49.309 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_aio_write_full PASSED [ 72%] 2026-03-10T14:57:49.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:49 vm03 bash[23394]: cluster 2026-03-10T14:57:48.290763+0000 mgr.y (mgr.24425) 230 : cluster [DBG] pgmap v379: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 514 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:49.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:49 vm03 bash[23394]: cluster 2026-03-10T14:57:48.290763+0000 mgr.y (mgr.24425) 230 : cluster [DBG] pgmap v379: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 514 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:49.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:49 vm03 bash[23394]: audit 2026-03-10T14:57:48.295124+0000 mon.a (mon.0) 1245 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:49.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:49 vm03 bash[23394]: audit 2026-03-10T14:57:48.295124+0000 mon.a (mon.0) 1245 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:49.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:49 vm03 bash[23394]: cluster 2026-03-10T14:57:48.302208+0000 mon.a (mon.0) 1246 : cluster [DBG] osdmap e284: 8 total, 8 up, 8 in 2026-03-10T14:57:49.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:49 vm03 bash[23394]: cluster 2026-03-10T14:57:48.302208+0000 mon.a (mon.0) 1246 : cluster [DBG] osdmap e284: 8 total, 8 up, 8 in 2026-03-10T14:57:49.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:49 vm03 bash[23394]: audit 2026-03-10T14:57:48.742305+0000 mgr.y (mgr.24425) 231 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:57:49.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:49 vm03 bash[23394]: audit 2026-03-10T14:57:48.742305+0000 mgr.y (mgr.24425) 231 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:57:49.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:49 vm00 bash[28403]: cluster 2026-03-10T14:57:48.290763+0000 mgr.y (mgr.24425) 230 : cluster [DBG] pgmap v379: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 514 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:49.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:49 vm00 bash[28403]: cluster 2026-03-10T14:57:48.290763+0000 mgr.y (mgr.24425) 230 : cluster [DBG] pgmap v379: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 514 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:49.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:49 vm00 bash[28403]: audit 2026-03-10T14:57:48.295124+0000 mon.a (mon.0) 1245 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:49.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:49 vm00 bash[28403]: audit 2026-03-10T14:57:48.295124+0000 mon.a (mon.0) 1245 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:49.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:49 vm00 bash[28403]: cluster 2026-03-10T14:57:48.302208+0000 mon.a (mon.0) 1246 : cluster [DBG] osdmap e284: 8 total, 8 up, 8 in 2026-03-10T14:57:49.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:49 vm00 bash[28403]: cluster 2026-03-10T14:57:48.302208+0000 mon.a (mon.0) 1246 : cluster [DBG] osdmap e284: 8 total, 8 up, 8 in 2026-03-10T14:57:49.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:49 vm00 bash[28403]: audit 2026-03-10T14:57:48.742305+0000 mgr.y (mgr.24425) 231 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:57:49.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:49 vm00 bash[28403]: audit 2026-03-10T14:57:48.742305+0000 mgr.y (mgr.24425) 231 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:57:49.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:49 vm00 bash[20726]: cluster 2026-03-10T14:57:48.290763+0000 mgr.y (mgr.24425) 230 : cluster [DBG] pgmap v379: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 514 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:49.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:49 vm00 bash[20726]: cluster 2026-03-10T14:57:48.290763+0000 mgr.y (mgr.24425) 230 : cluster [DBG] pgmap v379: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 514 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:49.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 
14:57:49 vm00 bash[20726]: audit 2026-03-10T14:57:48.295124+0000 mon.a (mon.0) 1245 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:49.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:49 vm00 bash[20726]: audit 2026-03-10T14:57:48.295124+0000 mon.a (mon.0) 1245 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:49.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:49 vm00 bash[20726]: cluster 2026-03-10T14:57:48.302208+0000 mon.a (mon.0) 1246 : cluster [DBG] osdmap e284: 8 total, 8 up, 8 in 2026-03-10T14:57:49.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:49 vm00 bash[20726]: cluster 2026-03-10T14:57:48.302208+0000 mon.a (mon.0) 1246 : cluster [DBG] osdmap e284: 8 total, 8 up, 8 in 2026-03-10T14:57:49.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:49 vm00 bash[20726]: audit 2026-03-10T14:57:48.742305+0000 mgr.y (mgr.24425) 231 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:57:49.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:49 vm00 bash[20726]: audit 2026-03-10T14:57:48.742305+0000 mgr.y (mgr.24425) 231 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:57:50.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:50 vm03 bash[23394]: cluster 2026-03-10T14:57:49.310509+0000 mon.a (mon.0) 1247 : cluster [DBG] osdmap e285: 8 total, 8 up, 8 in 2026-03-10T14:57:50.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:50 vm03 bash[23394]: cluster 2026-03-10T14:57:49.310509+0000 mon.a (mon.0) 1247 : cluster [DBG] osdmap e285: 8 total, 8 up, 8 in 2026-03-10T14:57:50.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:50 vm00 bash[28403]: cluster 2026-03-10T14:57:49.310509+0000 mon.a 
(mon.0) 1247 : cluster [DBG] osdmap e285: 8 total, 8 up, 8 in 2026-03-10T14:57:50.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:50 vm00 bash[28403]: cluster 2026-03-10T14:57:49.310509+0000 mon.a (mon.0) 1247 : cluster [DBG] osdmap e285: 8 total, 8 up, 8 in 2026-03-10T14:57:50.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:50 vm00 bash[20726]: cluster 2026-03-10T14:57:49.310509+0000 mon.a (mon.0) 1247 : cluster [DBG] osdmap e285: 8 total, 8 up, 8 in 2026-03-10T14:57:50.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:50 vm00 bash[20726]: cluster 2026-03-10T14:57:49.310509+0000 mon.a (mon.0) 1247 : cluster [DBG] osdmap e285: 8 total, 8 up, 8 in 2026-03-10T14:57:51.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:51 vm03 bash[23394]: cluster 2026-03-10T14:57:50.291351+0000 mgr.y (mgr.24425) 232 : cluster [DBG] pgmap v382: 164 pgs: 164 active+clean; 455 KiB data, 515 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:51.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:51 vm03 bash[23394]: cluster 2026-03-10T14:57:50.291351+0000 mgr.y (mgr.24425) 232 : cluster [DBG] pgmap v382: 164 pgs: 164 active+clean; 455 KiB data, 515 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:51.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:51 vm03 bash[23394]: cluster 2026-03-10T14:57:50.320085+0000 mon.a (mon.0) 1248 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:57:51.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:51 vm03 bash[23394]: cluster 2026-03-10T14:57:50.320085+0000 mon.a (mon.0) 1248 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:57:51.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:51 vm03 bash[23394]: cluster 2026-03-10T14:57:50.348270+0000 mon.a (mon.0) 1249 : cluster [DBG] osdmap e286: 8 total, 8 up, 8 in 
2026-03-10T14:57:51.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:51 vm03 bash[23394]: cluster 2026-03-10T14:57:50.348270+0000 mon.a (mon.0) 1249 : cluster [DBG] osdmap e286: 8 total, 8 up, 8 in 2026-03-10T14:57:51.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:51 vm00 bash[28403]: cluster 2026-03-10T14:57:50.291351+0000 mgr.y (mgr.24425) 232 : cluster [DBG] pgmap v382: 164 pgs: 164 active+clean; 455 KiB data, 515 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:51.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:51 vm00 bash[28403]: cluster 2026-03-10T14:57:50.291351+0000 mgr.y (mgr.24425) 232 : cluster [DBG] pgmap v382: 164 pgs: 164 active+clean; 455 KiB data, 515 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:51.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:51 vm00 bash[28403]: cluster 2026-03-10T14:57:50.320085+0000 mon.a (mon.0) 1248 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:57:51.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:51 vm00 bash[28403]: cluster 2026-03-10T14:57:50.320085+0000 mon.a (mon.0) 1248 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:57:51.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:51 vm00 bash[28403]: cluster 2026-03-10T14:57:50.348270+0000 mon.a (mon.0) 1249 : cluster [DBG] osdmap e286: 8 total, 8 up, 8 in 2026-03-10T14:57:51.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:51 vm00 bash[28403]: cluster 2026-03-10T14:57:50.348270+0000 mon.a (mon.0) 1249 : cluster [DBG] osdmap e286: 8 total, 8 up, 8 in 2026-03-10T14:57:51.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:51 vm00 bash[20726]: cluster 2026-03-10T14:57:50.291351+0000 mgr.y (mgr.24425) 232 : cluster [DBG] pgmap v382: 164 pgs: 164 active+clean; 455 KiB data, 515 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 
op/s 2026-03-10T14:57:51.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:51 vm00 bash[20726]: cluster 2026-03-10T14:57:50.291351+0000 mgr.y (mgr.24425) 232 : cluster [DBG] pgmap v382: 164 pgs: 164 active+clean; 455 KiB data, 515 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:51.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:51 vm00 bash[20726]: cluster 2026-03-10T14:57:50.320085+0000 mon.a (mon.0) 1248 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:57:51.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:51 vm00 bash[20726]: cluster 2026-03-10T14:57:50.320085+0000 mon.a (mon.0) 1248 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:57:51.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:51 vm00 bash[20726]: cluster 2026-03-10T14:57:50.348270+0000 mon.a (mon.0) 1249 : cluster [DBG] osdmap e286: 8 total, 8 up, 8 in 2026-03-10T14:57:51.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:51 vm00 bash[20726]: cluster 2026-03-10T14:57:50.348270+0000 mon.a (mon.0) 1249 : cluster [DBG] osdmap e286: 8 total, 8 up, 8 in 2026-03-10T14:57:52.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:52 vm00 bash[28403]: cluster 2026-03-10T14:57:51.338403+0000 mon.a (mon.0) 1250 : cluster [DBG] osdmap e287: 8 total, 8 up, 8 in 2026-03-10T14:57:52.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:52 vm00 bash[28403]: cluster 2026-03-10T14:57:51.338403+0000 mon.a (mon.0) 1250 : cluster [DBG] osdmap e287: 8 total, 8 up, 8 in 2026-03-10T14:57:52.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:52 vm00 bash[28403]: audit 2026-03-10T14:57:51.395090+0000 mon.c (mon.2) 44 : audit [INF] from='client.? 
192.168.123.100:0/1378641670' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:52.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:52 vm00 bash[28403]: audit 2026-03-10T14:57:51.395090+0000 mon.c (mon.2) 44 : audit [INF] from='client.? 192.168.123.100:0/1378641670' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:52.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:52 vm00 bash[28403]: audit 2026-03-10T14:57:51.395421+0000 mon.a (mon.0) 1251 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:52.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:52 vm00 bash[28403]: audit 2026-03-10T14:57:51.395421+0000 mon.a (mon.0) 1251 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:52.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:52 vm00 bash[28403]: audit 2026-03-10T14:57:52.337658+0000 mon.a (mon.0) 1252 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:52.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:52 vm00 bash[28403]: audit 2026-03-10T14:57:52.337658+0000 mon.a (mon.0) 1252 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:52.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:52 vm00 bash[28403]: cluster 2026-03-10T14:57:52.340341+0000 mon.a (mon.0) 1253 : cluster [DBG] osdmap e288: 8 total, 8 up, 8 in 2026-03-10T14:57:52.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:52 vm00 bash[28403]: cluster 2026-03-10T14:57:52.340341+0000 mon.a (mon.0) 1253 : cluster [DBG] osdmap e288: 8 total, 8 up, 8 in 2026-03-10T14:57:52.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:52 vm00 bash[20726]: cluster 2026-03-10T14:57:51.338403+0000 mon.a (mon.0) 1250 : cluster [DBG] osdmap e287: 8 total, 8 up, 8 in 2026-03-10T14:57:52.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:52 vm00 bash[20726]: cluster 2026-03-10T14:57:51.338403+0000 mon.a (mon.0) 1250 : cluster [DBG] osdmap e287: 8 total, 8 up, 8 in 2026-03-10T14:57:52.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:52 vm00 bash[20726]: audit 2026-03-10T14:57:51.395090+0000 mon.c (mon.2) 44 : audit [INF] from='client.? 192.168.123.100:0/1378641670' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:52.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:52 vm00 bash[20726]: audit 2026-03-10T14:57:51.395090+0000 mon.c (mon.2) 44 : audit [INF] from='client.? 192.168.123.100:0/1378641670' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:52.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:52 vm00 bash[20726]: audit 2026-03-10T14:57:51.395421+0000 mon.a (mon.0) 1251 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:52.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:52 vm00 bash[20726]: audit 2026-03-10T14:57:51.395421+0000 mon.a (mon.0) 1251 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:52.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:52 vm00 bash[20726]: audit 2026-03-10T14:57:52.337658+0000 mon.a (mon.0) 1252 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:52.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:52 vm00 bash[20726]: audit 2026-03-10T14:57:52.337658+0000 mon.a (mon.0) 1252 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:52.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:52 vm00 bash[20726]: cluster 2026-03-10T14:57:52.340341+0000 mon.a (mon.0) 1253 : cluster [DBG] osdmap e288: 8 total, 8 up, 8 in 2026-03-10T14:57:52.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:52 vm00 bash[20726]: cluster 2026-03-10T14:57:52.340341+0000 mon.a (mon.0) 1253 : cluster [DBG] osdmap e288: 8 total, 8 up, 8 in 2026-03-10T14:57:52.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:52 vm03 bash[23394]: cluster 2026-03-10T14:57:51.338403+0000 mon.a (mon.0) 1250 : cluster [DBG] osdmap e287: 8 total, 8 up, 8 in 2026-03-10T14:57:52.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:52 vm03 bash[23394]: cluster 2026-03-10T14:57:51.338403+0000 mon.a (mon.0) 1250 : cluster [DBG] osdmap e287: 8 total, 8 up, 8 in 2026-03-10T14:57:52.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:52 vm03 bash[23394]: audit 2026-03-10T14:57:51.395090+0000 mon.c (mon.2) 44 : audit [INF] from='client.? 192.168.123.100:0/1378641670' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:52.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:52 vm03 bash[23394]: audit 2026-03-10T14:57:51.395090+0000 mon.c (mon.2) 44 : audit [INF] from='client.? 
192.168.123.100:0/1378641670' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:52.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:52 vm03 bash[23394]: audit 2026-03-10T14:57:51.395421+0000 mon.a (mon.0) 1251 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:52.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:52 vm03 bash[23394]: audit 2026-03-10T14:57:51.395421+0000 mon.a (mon.0) 1251 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:52.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:52 vm03 bash[23394]: audit 2026-03-10T14:57:52.337658+0000 mon.a (mon.0) 1252 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:52.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:52 vm03 bash[23394]: audit 2026-03-10T14:57:52.337658+0000 mon.a (mon.0) 1252 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:52.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:52 vm03 bash[23394]: cluster 2026-03-10T14:57:52.340341+0000 mon.a (mon.0) 1253 : cluster [DBG] osdmap e288: 8 total, 8 up, 8 in 2026-03-10T14:57:52.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:52 vm03 bash[23394]: cluster 2026-03-10T14:57:52.340341+0000 mon.a (mon.0) 1253 : cluster [DBG] osdmap e288: 8 total, 8 up, 8 in 2026-03-10T14:57:53.369 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_aio_writesame PASSED [ 73%] 2026-03-10T14:57:53.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:53 vm00 bash[28403]: cluster 2026-03-10T14:57:52.291639+0000 mgr.y (mgr.24425) 233 : cluster [DBG] pgmap v385: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 515 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:53.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:53 vm00 bash[28403]: cluster 2026-03-10T14:57:52.291639+0000 mgr.y (mgr.24425) 233 : cluster [DBG] pgmap v385: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 515 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:53.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:53 vm00 bash[28403]: cluster 2026-03-10T14:57:53.361513+0000 mon.a (mon.0) 1254 : cluster [DBG] osdmap e289: 8 total, 8 up, 8 in 2026-03-10T14:57:53.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:53 vm00 bash[28403]: cluster 2026-03-10T14:57:53.361513+0000 mon.a (mon.0) 1254 : cluster [DBG] osdmap e289: 8 total, 8 up, 8 in 2026-03-10T14:57:53.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:53 vm00 bash[20726]: cluster 2026-03-10T14:57:52.291639+0000 mgr.y (mgr.24425) 233 : cluster [DBG] pgmap v385: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 515 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:53.714 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:53 vm00 bash[20726]: cluster 2026-03-10T14:57:52.291639+0000 mgr.y (mgr.24425) 233 : cluster [DBG] pgmap v385: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 515 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:53.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:53 vm00 bash[20726]: cluster 2026-03-10T14:57:53.361513+0000 mon.a (mon.0) 1254 : cluster [DBG] osdmap e289: 8 total, 8 up, 8 in 2026-03-10T14:57:53.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:53 vm00 bash[20726]: cluster 2026-03-10T14:57:53.361513+0000 mon.a (mon.0) 1254 : cluster [DBG] osdmap e289: 8 total, 8 up, 8 in 2026-03-10T14:57:53.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:53 vm03 bash[23394]: cluster 2026-03-10T14:57:52.291639+0000 mgr.y (mgr.24425) 233 : cluster [DBG] pgmap v385: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 515 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:53.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:53 vm03 bash[23394]: cluster 2026-03-10T14:57:52.291639+0000 mgr.y (mgr.24425) 233 : cluster [DBG] pgmap v385: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 515 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:53.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:53 vm03 bash[23394]: cluster 2026-03-10T14:57:53.361513+0000 mon.a (mon.0) 1254 : cluster [DBG] osdmap e289: 8 total, 8 up, 8 in 2026-03-10T14:57:53.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:53 vm03 bash[23394]: cluster 2026-03-10T14:57:53.361513+0000 mon.a (mon.0) 1254 : cluster [DBG] osdmap e289: 8 total, 8 up, 8 in 2026-03-10T14:57:54.214 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:57:53 vm00 bash[21005]: ::ffff:192.168.123.103 - - [10/Mar/2026:14:57:53] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:57:55.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:55 vm00 bash[28403]: 
cluster 2026-03-10T14:57:54.291955+0000 mgr.y (mgr.24425) 234 : cluster [DBG] pgmap v388: 164 pgs: 164 active+clean; 455 KiB data, 515 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:57:55.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:55 vm00 bash[28403]: cluster 2026-03-10T14:57:54.291955+0000 mgr.y (mgr.24425) 234 : cluster [DBG] pgmap v388: 164 pgs: 164 active+clean; 455 KiB data, 515 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:57:55.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:55 vm00 bash[28403]: cluster 2026-03-10T14:57:54.415269+0000 mon.a (mon.0) 1255 : cluster [DBG] osdmap e290: 8 total, 8 up, 8 in 2026-03-10T14:57:55.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:55 vm00 bash[28403]: cluster 2026-03-10T14:57:54.415269+0000 mon.a (mon.0) 1255 : cluster [DBG] osdmap e290: 8 total, 8 up, 8 in 2026-03-10T14:57:55.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:55 vm00 bash[28403]: audit 2026-03-10T14:57:54.652731+0000 mon.a (mon.0) 1256 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:57:55.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:55 vm00 bash[28403]: audit 2026-03-10T14:57:54.652731+0000 mon.a (mon.0) 1256 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:57:55.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:55 vm00 bash[20726]: cluster 2026-03-10T14:57:54.291955+0000 mgr.y (mgr.24425) 234 : cluster [DBG] pgmap v388: 164 pgs: 164 active+clean; 455 KiB data, 515 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:57:55.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:55 vm00 bash[20726]: cluster 2026-03-10T14:57:54.291955+0000 mgr.y (mgr.24425) 234 : cluster [DBG] pgmap v388: 164 pgs: 164 active+clean; 455 KiB data, 515 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:57:55.715 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:55 vm00 bash[20726]: cluster 2026-03-10T14:57:54.415269+0000 mon.a (mon.0) 1255 : cluster [DBG] osdmap e290: 8 total, 8 up, 8 in 2026-03-10T14:57:55.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:55 vm00 bash[20726]: cluster 2026-03-10T14:57:54.415269+0000 mon.a (mon.0) 1255 : cluster [DBG] osdmap e290: 8 total, 8 up, 8 in 2026-03-10T14:57:55.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:55 vm00 bash[20726]: audit 2026-03-10T14:57:54.652731+0000 mon.a (mon.0) 1256 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:57:55.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:55 vm00 bash[20726]: audit 2026-03-10T14:57:54.652731+0000 mon.a (mon.0) 1256 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:57:55.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:55 vm03 bash[23394]: cluster 2026-03-10T14:57:54.291955+0000 mgr.y (mgr.24425) 234 : cluster [DBG] pgmap v388: 164 pgs: 164 active+clean; 455 KiB data, 515 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:57:55.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:55 vm03 bash[23394]: cluster 2026-03-10T14:57:54.291955+0000 mgr.y (mgr.24425) 234 : cluster [DBG] pgmap v388: 164 pgs: 164 active+clean; 455 KiB data, 515 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:57:55.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:55 vm03 bash[23394]: cluster 2026-03-10T14:57:54.415269+0000 mon.a (mon.0) 1255 : cluster [DBG] osdmap e290: 8 total, 8 up, 8 in 2026-03-10T14:57:55.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:55 vm03 bash[23394]: cluster 2026-03-10T14:57:54.415269+0000 mon.a (mon.0) 1255 : cluster [DBG] osdmap e290: 8 total, 8 up, 8 in 2026-03-10T14:57:55.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:55 vm03 
bash[23394]: audit 2026-03-10T14:57:54.652731+0000 mon.a (mon.0) 1256 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:57:55.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:55 vm03 bash[23394]: audit 2026-03-10T14:57:54.652731+0000 mon.a (mon.0) 1256 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:57:56.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:56 vm00 bash[28403]: cluster 2026-03-10T14:57:55.408110+0000 mon.a (mon.0) 1257 : cluster [DBG] osdmap e291: 8 total, 8 up, 8 in 2026-03-10T14:57:56.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:56 vm00 bash[28403]: cluster 2026-03-10T14:57:55.408110+0000 mon.a (mon.0) 1257 : cluster [DBG] osdmap e291: 8 total, 8 up, 8 in 2026-03-10T14:57:56.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:56 vm00 bash[28403]: audit 2026-03-10T14:57:55.460265+0000 mon.a (mon.0) 1258 : audit [INF] from='client.? 192.168.123.100:0/3221020438' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:56.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:56 vm00 bash[28403]: audit 2026-03-10T14:57:55.460265+0000 mon.a (mon.0) 1258 : audit [INF] from='client.? 192.168.123.100:0/3221020438' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:56.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:56 vm00 bash[28403]: audit 2026-03-10T14:57:56.400092+0000 mon.a (mon.0) 1259 : audit [INF] from='client.? 192.168.123.100:0/3221020438' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:56.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:56 vm00 bash[28403]: audit 2026-03-10T14:57:56.400092+0000 mon.a (mon.0) 1259 : audit [INF] from='client.? 
192.168.123.100:0/3221020438' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:56.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:56 vm00 bash[28403]: cluster 2026-03-10T14:57:56.405447+0000 mon.a (mon.0) 1260 : cluster [DBG] osdmap e292: 8 total, 8 up, 8 in 2026-03-10T14:57:56.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:56 vm00 bash[28403]: cluster 2026-03-10T14:57:56.405447+0000 mon.a (mon.0) 1260 : cluster [DBG] osdmap e292: 8 total, 8 up, 8 in 2026-03-10T14:57:56.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:56 vm00 bash[20726]: cluster 2026-03-10T14:57:55.408110+0000 mon.a (mon.0) 1257 : cluster [DBG] osdmap e291: 8 total, 8 up, 8 in 2026-03-10T14:57:56.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:56 vm00 bash[20726]: cluster 2026-03-10T14:57:55.408110+0000 mon.a (mon.0) 1257 : cluster [DBG] osdmap e291: 8 total, 8 up, 8 in 2026-03-10T14:57:56.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:56 vm00 bash[20726]: audit 2026-03-10T14:57:55.460265+0000 mon.a (mon.0) 1258 : audit [INF] from='client.? 192.168.123.100:0/3221020438' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:56.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:56 vm00 bash[20726]: audit 2026-03-10T14:57:55.460265+0000 mon.a (mon.0) 1258 : audit [INF] from='client.? 192.168.123.100:0/3221020438' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:56.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:56 vm00 bash[20726]: audit 2026-03-10T14:57:56.400092+0000 mon.a (mon.0) 1259 : audit [INF] from='client.? 192.168.123.100:0/3221020438' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:56.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:56 vm00 bash[20726]: audit 2026-03-10T14:57:56.400092+0000 mon.a (mon.0) 1259 : audit [INF] from='client.? 
192.168.123.100:0/3221020438' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:56.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:56 vm00 bash[20726]: cluster 2026-03-10T14:57:56.405447+0000 mon.a (mon.0) 1260 : cluster [DBG] osdmap e292: 8 total, 8 up, 8 in 2026-03-10T14:57:56.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:56 vm00 bash[20726]: cluster 2026-03-10T14:57:56.405447+0000 mon.a (mon.0) 1260 : cluster [DBG] osdmap e292: 8 total, 8 up, 8 in 2026-03-10T14:57:56.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:56 vm03 bash[23394]: cluster 2026-03-10T14:57:55.408110+0000 mon.a (mon.0) 1257 : cluster [DBG] osdmap e291: 8 total, 8 up, 8 in 2026-03-10T14:57:56.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:56 vm03 bash[23394]: cluster 2026-03-10T14:57:55.408110+0000 mon.a (mon.0) 1257 : cluster [DBG] osdmap e291: 8 total, 8 up, 8 in 2026-03-10T14:57:56.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:56 vm03 bash[23394]: audit 2026-03-10T14:57:55.460265+0000 mon.a (mon.0) 1258 : audit [INF] from='client.? 192.168.123.100:0/3221020438' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:56.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:56 vm03 bash[23394]: audit 2026-03-10T14:57:55.460265+0000 mon.a (mon.0) 1258 : audit [INF] from='client.? 192.168.123.100:0/3221020438' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:57:56.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:56 vm03 bash[23394]: audit 2026-03-10T14:57:56.400092+0000 mon.a (mon.0) 1259 : audit [INF] from='client.? 192.168.123.100:0/3221020438' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:56.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:56 vm03 bash[23394]: audit 2026-03-10T14:57:56.400092+0000 mon.a (mon.0) 1259 : audit [INF] from='client.? 
192.168.123.100:0/3221020438' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:57:56.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:56 vm03 bash[23394]: cluster 2026-03-10T14:57:56.405447+0000 mon.a (mon.0) 1260 : cluster [DBG] osdmap e292: 8 total, 8 up, 8 in 2026-03-10T14:57:56.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:56 vm03 bash[23394]: cluster 2026-03-10T14:57:56.405447+0000 mon.a (mon.0) 1260 : cluster [DBG] osdmap e292: 8 total, 8 up, 8 in 2026-03-10T14:57:57.460 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_aio_stat PASSED [ 74%] 2026-03-10T14:57:57.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:57 vm03 bash[23394]: cluster 2026-03-10T14:57:56.292349+0000 mgr.y (mgr.24425) 235 : cluster [DBG] pgmap v391: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 515 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:57.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:57 vm03 bash[23394]: cluster 2026-03-10T14:57:56.292349+0000 mgr.y (mgr.24425) 235 : cluster [DBG] pgmap v391: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 515 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:57.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:57 vm03 bash[23394]: cluster 2026-03-10T14:57:56.423747+0000 mon.a (mon.0) 1261 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:57:57.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:57 vm03 bash[23394]: cluster 2026-03-10T14:57:56.423747+0000 mon.a (mon.0) 1261 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:57:57.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:57 vm00 bash[28403]: cluster 2026-03-10T14:57:56.292349+0000 mgr.y (mgr.24425) 235 : cluster [DBG] pgmap 
v391: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 515 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:57.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:57 vm00 bash[28403]: cluster 2026-03-10T14:57:56.292349+0000 mgr.y (mgr.24425) 235 : cluster [DBG] pgmap v391: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 515 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:57.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:57 vm00 bash[28403]: cluster 2026-03-10T14:57:56.423747+0000 mon.a (mon.0) 1261 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:57:57.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:57 vm00 bash[28403]: cluster 2026-03-10T14:57:56.423747+0000 mon.a (mon.0) 1261 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:57:57.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:57 vm00 bash[20726]: cluster 2026-03-10T14:57:56.292349+0000 mgr.y (mgr.24425) 235 : cluster [DBG] pgmap v391: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 515 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:57.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:57 vm00 bash[20726]: cluster 2026-03-10T14:57:56.292349+0000 mgr.y (mgr.24425) 235 : cluster [DBG] pgmap v391: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 515 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:57.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:57 vm00 bash[20726]: cluster 2026-03-10T14:57:56.423747+0000 mon.a (mon.0) 1261 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:57:57.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:57 vm00 bash[20726]: cluster 2026-03-10T14:57:56.423747+0000 mon.a (mon.0) 1261 : 
cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:57:58.748 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:58 vm03 bash[23394]: cluster 2026-03-10T14:57:57.459797+0000 mon.a (mon.0) 1262 : cluster [DBG] osdmap e293: 8 total, 8 up, 8 in 2026-03-10T14:57:58.748 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:58 vm03 bash[23394]: cluster 2026-03-10T14:57:57.459797+0000 mon.a (mon.0) 1262 : cluster [DBG] osdmap e293: 8 total, 8 up, 8 in 2026-03-10T14:57:58.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:58 vm00 bash[28403]: cluster 2026-03-10T14:57:57.459797+0000 mon.a (mon.0) 1262 : cluster [DBG] osdmap e293: 8 total, 8 up, 8 in 2026-03-10T14:57:58.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:58 vm00 bash[28403]: cluster 2026-03-10T14:57:57.459797+0000 mon.a (mon.0) 1262 : cluster [DBG] osdmap e293: 8 total, 8 up, 8 in 2026-03-10T14:57:58.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:58 vm00 bash[20726]: cluster 2026-03-10T14:57:57.459797+0000 mon.a (mon.0) 1262 : cluster [DBG] osdmap e293: 8 total, 8 up, 8 in 2026-03-10T14:57:58.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:58 vm00 bash[20726]: cluster 2026-03-10T14:57:57.459797+0000 mon.a (mon.0) 1262 : cluster [DBG] osdmap e293: 8 total, 8 up, 8 in 2026-03-10T14:57:59.125 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 14:57:58 vm03 bash[48459]: debug there is no tcmu-runner data available 2026-03-10T14:57:59.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:59 vm03 bash[23394]: cluster 2026-03-10T14:57:58.292661+0000 mgr.y (mgr.24425) 236 : cluster [DBG] pgmap v394: 164 pgs: 164 active+clean; 455 KiB data, 515 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:59.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:59 vm03 bash[23394]: cluster 2026-03-10T14:57:58.292661+0000 mgr.y (mgr.24425) 236 : cluster [DBG] pgmap v394: 164 pgs: 164 active+clean; 455 KiB data, 
515 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:59.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:59 vm03 bash[23394]: cluster 2026-03-10T14:57:58.500356+0000 mon.a (mon.0) 1263 : cluster [DBG] osdmap e294: 8 total, 8 up, 8 in 2026-03-10T14:57:59.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:59 vm03 bash[23394]: cluster 2026-03-10T14:57:58.500356+0000 mon.a (mon.0) 1263 : cluster [DBG] osdmap e294: 8 total, 8 up, 8 in 2026-03-10T14:57:59.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:59 vm03 bash[23394]: audit 2026-03-10T14:57:58.751159+0000 mgr.y (mgr.24425) 237 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:57:59.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:57:59 vm03 bash[23394]: audit 2026-03-10T14:57:58.751159+0000 mgr.y (mgr.24425) 237 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:57:59.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:59 vm00 bash[28403]: cluster 2026-03-10T14:57:58.292661+0000 mgr.y (mgr.24425) 236 : cluster [DBG] pgmap v394: 164 pgs: 164 active+clean; 455 KiB data, 515 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:59.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:59 vm00 bash[28403]: cluster 2026-03-10T14:57:58.292661+0000 mgr.y (mgr.24425) 236 : cluster [DBG] pgmap v394: 164 pgs: 164 active+clean; 455 KiB data, 515 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:59.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:59 vm00 bash[28403]: cluster 2026-03-10T14:57:58.500356+0000 mon.a (mon.0) 1263 : cluster [DBG] osdmap e294: 8 total, 8 up, 8 in 2026-03-10T14:57:59.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:59 vm00 bash[28403]: cluster 2026-03-10T14:57:58.500356+0000 mon.a (mon.0) 1263 : cluster [DBG] 
osdmap e294: 8 total, 8 up, 8 in 2026-03-10T14:57:59.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:59 vm00 bash[28403]: audit 2026-03-10T14:57:58.751159+0000 mgr.y (mgr.24425) 237 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:57:59.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:57:59 vm00 bash[28403]: audit 2026-03-10T14:57:58.751159+0000 mgr.y (mgr.24425) 237 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:57:59.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:59 vm00 bash[20726]: cluster 2026-03-10T14:57:58.292661+0000 mgr.y (mgr.24425) 236 : cluster [DBG] pgmap v394: 164 pgs: 164 active+clean; 455 KiB data, 515 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:59.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:59 vm00 bash[20726]: cluster 2026-03-10T14:57:58.292661+0000 mgr.y (mgr.24425) 236 : cluster [DBG] pgmap v394: 164 pgs: 164 active+clean; 455 KiB data, 515 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:57:59.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:59 vm00 bash[20726]: cluster 2026-03-10T14:57:58.500356+0000 mon.a (mon.0) 1263 : cluster [DBG] osdmap e294: 8 total, 8 up, 8 in 2026-03-10T14:57:59.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:59 vm00 bash[20726]: cluster 2026-03-10T14:57:58.500356+0000 mon.a (mon.0) 1263 : cluster [DBG] osdmap e294: 8 total, 8 up, 8 in 2026-03-10T14:57:59.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:59 vm00 bash[20726]: audit 2026-03-10T14:57:58.751159+0000 mgr.y (mgr.24425) 237 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:57:59.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:57:59 vm00 bash[20726]: audit 
2026-03-10T14:57:58.751159+0000 mgr.y (mgr.24425) 237 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:58:00.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:00 vm03 bash[23394]: cluster 2026-03-10T14:57:59.492093+0000 mon.a (mon.0) 1264 : cluster [DBG] osdmap e295: 8 total, 8 up, 8 in 2026-03-10T14:58:00.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:00 vm03 bash[23394]: cluster 2026-03-10T14:57:59.492093+0000 mon.a (mon.0) 1264 : cluster [DBG] osdmap e295: 8 total, 8 up, 8 in 2026-03-10T14:58:00.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:00 vm03 bash[23394]: audit 2026-03-10T14:57:59.553003+0000 mon.c (mon.2) 45 : audit [INF] from='client.? 192.168.123.100:0/1745283191' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:00.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:00 vm03 bash[23394]: audit 2026-03-10T14:57:59.553003+0000 mon.c (mon.2) 45 : audit [INF] from='client.? 192.168.123.100:0/1745283191' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:00.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:00 vm03 bash[23394]: audit 2026-03-10T14:57:59.553429+0000 mon.a (mon.0) 1265 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:00.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:00 vm03 bash[23394]: audit 2026-03-10T14:57:59.553429+0000 mon.a (mon.0) 1265 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:00.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:00 vm03 bash[23394]: audit 2026-03-10T14:58:00.492338+0000 mon.a (mon.0) 1266 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:58:00.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:00 vm03 bash[23394]: audit 2026-03-10T14:58:00.492338+0000 mon.a (mon.0) 1266 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:58:00.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:00 vm03 bash[23394]: cluster 2026-03-10T14:58:00.496256+0000 mon.a (mon.0) 1267 : cluster [DBG] osdmap e296: 8 total, 8 up, 8 in 2026-03-10T14:58:00.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:00 vm03 bash[23394]: cluster 2026-03-10T14:58:00.496256+0000 mon.a (mon.0) 1267 : cluster [DBG] osdmap e296: 8 total, 8 up, 8 in 2026-03-10T14:58:00.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:00 vm00 bash[28403]: cluster 2026-03-10T14:57:59.492093+0000 mon.a (mon.0) 1264 : cluster [DBG] osdmap e295: 8 total, 8 up, 8 in 2026-03-10T14:58:00.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:00 vm00 bash[28403]: cluster 2026-03-10T14:57:59.492093+0000 mon.a (mon.0) 1264 : cluster [DBG] osdmap e295: 8 total, 8 up, 8 in 2026-03-10T14:58:00.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:00 vm00 bash[28403]: audit 2026-03-10T14:57:59.553003+0000 mon.c (mon.2) 45 : audit [INF] from='client.? 192.168.123.100:0/1745283191' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:00.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:00 vm00 bash[28403]: audit 2026-03-10T14:57:59.553003+0000 mon.c (mon.2) 45 : audit [INF] from='client.? 192.168.123.100:0/1745283191' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:00.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:00 vm00 bash[28403]: audit 2026-03-10T14:57:59.553429+0000 mon.a (mon.0) 1265 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:00.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:00 vm00 bash[28403]: audit 2026-03-10T14:57:59.553429+0000 mon.a (mon.0) 1265 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:00.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:00 vm00 bash[28403]: audit 2026-03-10T14:58:00.492338+0000 mon.a (mon.0) 1266 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:58:00.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:00 vm00 bash[28403]: audit 2026-03-10T14:58:00.492338+0000 mon.a (mon.0) 1266 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:58:00.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:00 vm00 bash[28403]: cluster 2026-03-10T14:58:00.496256+0000 mon.a (mon.0) 1267 : cluster [DBG] osdmap e296: 8 total, 8 up, 8 in 2026-03-10T14:58:00.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:00 vm00 bash[28403]: cluster 2026-03-10T14:58:00.496256+0000 mon.a (mon.0) 1267 : cluster [DBG] osdmap e296: 8 total, 8 up, 8 in 2026-03-10T14:58:00.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:00 vm00 bash[20726]: cluster 2026-03-10T14:57:59.492093+0000 mon.a (mon.0) 1264 : cluster [DBG] osdmap e295: 8 total, 8 up, 8 in 2026-03-10T14:58:00.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:00 vm00 bash[20726]: cluster 2026-03-10T14:57:59.492093+0000 mon.a (mon.0) 1264 : cluster [DBG] osdmap e295: 8 total, 8 up, 8 in 2026-03-10T14:58:00.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:00 vm00 bash[20726]: audit 2026-03-10T14:57:59.553003+0000 mon.c (mon.2) 45 : audit [INF] from='client.? 
192.168.123.100:0/1745283191' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:00.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:00 vm00 bash[20726]: audit 2026-03-10T14:57:59.553003+0000 mon.c (mon.2) 45 : audit [INF] from='client.? 192.168.123.100:0/1745283191' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:00.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:00 vm00 bash[20726]: audit 2026-03-10T14:57:59.553429+0000 mon.a (mon.0) 1265 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:00.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:00 vm00 bash[20726]: audit 2026-03-10T14:57:59.553429+0000 mon.a (mon.0) 1265 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:00.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:00 vm00 bash[20726]: audit 2026-03-10T14:58:00.492338+0000 mon.a (mon.0) 1266 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:58:00.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:00 vm00 bash[20726]: audit 2026-03-10T14:58:00.492338+0000 mon.a (mon.0) 1266 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:58:00.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:00 vm00 bash[20726]: cluster 2026-03-10T14:58:00.496256+0000 mon.a (mon.0) 1267 : cluster [DBG] osdmap e296: 8 total, 8 up, 8 in 2026-03-10T14:58:00.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:00 vm00 bash[20726]: cluster 2026-03-10T14:58:00.496256+0000 mon.a (mon.0) 1267 : cluster [DBG] osdmap e296: 8 total, 8 up, 8 in 2026-03-10T14:58:01.502 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_aio_remove PASSED [ 75%] 2026-03-10T14:58:01.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:01 vm03 bash[23394]: cluster 2026-03-10T14:58:00.293090+0000 mgr.y (mgr.24425) 238 : cluster [DBG] pgmap v397: 196 pgs: 196 active+clean; 455 KiB data, 516 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T14:58:01.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:01 vm03 bash[23394]: cluster 2026-03-10T14:58:00.293090+0000 mgr.y (mgr.24425) 238 : cluster [DBG] pgmap v397: 196 pgs: 196 active+clean; 455 KiB data, 516 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T14:58:01.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:01 vm03 bash[23394]: cluster 2026-03-10T14:58:01.500474+0000 mon.a (mon.0) 1268 : cluster [DBG] osdmap e297: 8 total, 8 up, 8 in 2026-03-10T14:58:01.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:01 vm03 bash[23394]: cluster 2026-03-10T14:58:01.500474+0000 mon.a (mon.0) 1268 : cluster [DBG] osdmap e297: 8 total, 8 up, 8 in 2026-03-10T14:58:01.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:01 vm00 bash[28403]: cluster 2026-03-10T14:58:00.293090+0000 mgr.y (mgr.24425) 238 : cluster [DBG] pgmap v397: 196 pgs: 196 active+clean; 455 KiB data, 516 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T14:58:01.964 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:01 vm00 bash[28403]: cluster 2026-03-10T14:58:00.293090+0000 mgr.y (mgr.24425) 238 : cluster [DBG] pgmap v397: 196 pgs: 196 active+clean; 455 KiB data, 516 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T14:58:01.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:01 vm00 bash[28403]: cluster 2026-03-10T14:58:01.500474+0000 mon.a (mon.0) 1268 : cluster [DBG] osdmap e297: 8 total, 8 up, 8 in 2026-03-10T14:58:01.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:01 vm00 bash[28403]: cluster 2026-03-10T14:58:01.500474+0000 mon.a (mon.0) 1268 : cluster [DBG] osdmap e297: 8 total, 8 up, 8 in 2026-03-10T14:58:01.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:01 vm00 bash[20726]: cluster 2026-03-10T14:58:00.293090+0000 mgr.y (mgr.24425) 238 : cluster [DBG] pgmap v397: 196 pgs: 196 active+clean; 455 KiB data, 516 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T14:58:01.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:01 vm00 bash[20726]: cluster 2026-03-10T14:58:00.293090+0000 mgr.y (mgr.24425) 238 : cluster [DBG] pgmap v397: 196 pgs: 196 active+clean; 455 KiB data, 516 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T14:58:01.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:01 vm00 bash[20726]: cluster 2026-03-10T14:58:01.500474+0000 mon.a (mon.0) 1268 : cluster [DBG] osdmap e297: 8 total, 8 up, 8 in 2026-03-10T14:58:01.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:01 vm00 bash[20726]: cluster 2026-03-10T14:58:01.500474+0000 mon.a (mon.0) 1268 : cluster [DBG] osdmap e297: 8 total, 8 up, 8 in 2026-03-10T14:58:02.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:02 vm03 bash[23394]: cluster 2026-03-10T14:58:02.521105+0000 mon.a (mon.0) 1269 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:58:02.875 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:02 vm03 bash[23394]: cluster 2026-03-10T14:58:02.521105+0000 mon.a (mon.0) 1269 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:58:02.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:02 vm03 bash[23394]: cluster 2026-03-10T14:58:02.591936+0000 mon.a (mon.0) 1270 : cluster [DBG] osdmap e298: 8 total, 8 up, 8 in 2026-03-10T14:58:02.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:02 vm03 bash[23394]: cluster 2026-03-10T14:58:02.591936+0000 mon.a (mon.0) 1270 : cluster [DBG] osdmap e298: 8 total, 8 up, 8 in 2026-03-10T14:58:02.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:02 vm00 bash[28403]: cluster 2026-03-10T14:58:02.521105+0000 mon.a (mon.0) 1269 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:58:02.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:02 vm00 bash[28403]: cluster 2026-03-10T14:58:02.521105+0000 mon.a (mon.0) 1269 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:58:02.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:02 vm00 bash[28403]: cluster 2026-03-10T14:58:02.591936+0000 mon.a (mon.0) 1270 : cluster [DBG] osdmap e298: 8 total, 8 up, 8 in 2026-03-10T14:58:02.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:02 vm00 bash[28403]: cluster 2026-03-10T14:58:02.591936+0000 mon.a (mon.0) 1270 : cluster [DBG] osdmap e298: 8 total, 8 up, 8 in 2026-03-10T14:58:02.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:02 vm00 bash[20726]: cluster 2026-03-10T14:58:02.521105+0000 mon.a (mon.0) 1269 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:58:02.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:02 vm00 bash[20726]: cluster 2026-03-10T14:58:02.521105+0000 mon.a (mon.0) 1269 : 
cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:58:02.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:02 vm00 bash[20726]: cluster 2026-03-10T14:58:02.591936+0000 mon.a (mon.0) 1270 : cluster [DBG] osdmap e298: 8 total, 8 up, 8 in 2026-03-10T14:58:02.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:02 vm00 bash[20726]: cluster 2026-03-10T14:58:02.591936+0000 mon.a (mon.0) 1270 : cluster [DBG] osdmap e298: 8 total, 8 up, 8 in 2026-03-10T14:58:03.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:03 vm03 bash[23394]: cluster 2026-03-10T14:58:02.293337+0000 mgr.y (mgr.24425) 239 : cluster [DBG] pgmap v400: 164 pgs: 164 active+clean; 455 KiB data, 516 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:58:03.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:03 vm03 bash[23394]: cluster 2026-03-10T14:58:02.293337+0000 mgr.y (mgr.24425) 239 : cluster [DBG] pgmap v400: 164 pgs: 164 active+clean; 455 KiB data, 516 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:58:03.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:03 vm03 bash[23394]: cluster 2026-03-10T14:58:03.580657+0000 mon.a (mon.0) 1271 : cluster [DBG] osdmap e299: 8 total, 8 up, 8 in 2026-03-10T14:58:03.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:03 vm03 bash[23394]: cluster 2026-03-10T14:58:03.580657+0000 mon.a (mon.0) 1271 : cluster [DBG] osdmap e299: 8 total, 8 up, 8 in 2026-03-10T14:58:03.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:03 vm03 bash[23394]: audit 2026-03-10T14:58:03.605626+0000 mon.b (mon.1) 49 : audit [DBG] from='client.? 
192.168.123.100:0/3274062284' entity='client.admin' cmd=[{"prefix": "osd map", "pool": "test_pool", "object": "foo", "format": "json"}]: dispatch 2026-03-10T14:58:03.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:03 vm03 bash[23394]: audit 2026-03-10T14:58:03.605626+0000 mon.b (mon.1) 49 : audit [DBG] from='client.? 192.168.123.100:0/3274062284' entity='client.admin' cmd=[{"prefix": "osd map", "pool": "test_pool", "object": "foo", "format": "json"}]: dispatch 2026-03-10T14:58:03.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:03 vm03 bash[23394]: audit 2026-03-10T14:58:03.606056+0000 mon.b (mon.1) 50 : audit [INF] from='client.? 192.168.123.100:0/3274062284' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-10T14:58:03.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:03 vm03 bash[23394]: audit 2026-03-10T14:58:03.606056+0000 mon.b (mon.1) 50 : audit [INF] from='client.? 192.168.123.100:0/3274062284' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-10T14:58:03.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:03 vm03 bash[23394]: audit 2026-03-10T14:58:03.610130+0000 mon.a (mon.0) 1272 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-10T14:58:03.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:03 vm03 bash[23394]: audit 2026-03-10T14:58:03.610130+0000 mon.a (mon.0) 1272 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-10T14:58:03.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:03 vm00 bash[28403]: cluster 2026-03-10T14:58:02.293337+0000 mgr.y (mgr.24425) 239 : cluster [DBG] pgmap v400: 164 pgs: 164 active+clean; 455 KiB data, 516 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:58:03.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:03 vm00 bash[28403]: cluster 2026-03-10T14:58:02.293337+0000 mgr.y (mgr.24425) 239 : cluster [DBG] pgmap v400: 164 pgs: 164 active+clean; 455 KiB data, 516 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:58:03.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:03 vm00 bash[28403]: cluster 2026-03-10T14:58:03.580657+0000 mon.a (mon.0) 1271 : cluster [DBG] osdmap e299: 8 total, 8 up, 8 in 2026-03-10T14:58:03.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:03 vm00 bash[28403]: cluster 2026-03-10T14:58:03.580657+0000 mon.a (mon.0) 1271 : cluster [DBG] osdmap e299: 8 total, 8 up, 8 in 2026-03-10T14:58:03.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:03 vm00 bash[28403]: audit 2026-03-10T14:58:03.605626+0000 mon.b (mon.1) 49 : audit [DBG] from='client.? 192.168.123.100:0/3274062284' entity='client.admin' cmd=[{"prefix": "osd map", "pool": "test_pool", "object": "foo", "format": "json"}]: dispatch 2026-03-10T14:58:03.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:03 vm00 bash[28403]: audit 2026-03-10T14:58:03.605626+0000 mon.b (mon.1) 49 : audit [DBG] from='client.? 192.168.123.100:0/3274062284' entity='client.admin' cmd=[{"prefix": "osd map", "pool": "test_pool", "object": "foo", "format": "json"}]: dispatch 2026-03-10T14:58:03.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:03 vm00 bash[28403]: audit 2026-03-10T14:58:03.606056+0000 mon.b (mon.1) 50 : audit [INF] from='client.? 
192.168.123.100:0/3274062284' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-10T14:58:03.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:03 vm00 bash[28403]: audit 2026-03-10T14:58:03.606056+0000 mon.b (mon.1) 50 : audit [INF] from='client.? 192.168.123.100:0/3274062284' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-10T14:58:03.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:03 vm00 bash[28403]: audit 2026-03-10T14:58:03.610130+0000 mon.a (mon.0) 1272 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-10T14:58:03.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:03 vm00 bash[28403]: audit 2026-03-10T14:58:03.610130+0000 mon.a (mon.0) 1272 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-10T14:58:03.964 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:58:03 vm00 bash[21005]: ::ffff:192.168.123.103 - - [10/Mar/2026:14:58:03] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:58:03.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:03 vm00 bash[20726]: cluster 2026-03-10T14:58:02.293337+0000 mgr.y (mgr.24425) 239 : cluster [DBG] pgmap v400: 164 pgs: 164 active+clean; 455 KiB data, 516 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:58:03.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:03 vm00 bash[20726]: cluster 2026-03-10T14:58:02.293337+0000 mgr.y (mgr.24425) 239 : cluster [DBG] pgmap v400: 164 pgs: 164 active+clean; 455 KiB data, 516 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:58:03.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:03 vm00 bash[20726]: cluster 2026-03-10T14:58:03.580657+0000 mon.a (mon.0) 1271 : cluster [DBG] osdmap e299: 8 total, 8 up, 8 in 2026-03-10T14:58:03.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:03 vm00 bash[20726]: cluster 
2026-03-10T14:58:03.580657+0000 mon.a (mon.0) 1271 : cluster [DBG] osdmap e299: 8 total, 8 up, 8 in 2026-03-10T14:58:03.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:03 vm00 bash[20726]: audit 2026-03-10T14:58:03.605626+0000 mon.b (mon.1) 49 : audit [DBG] from='client.? 192.168.123.100:0/3274062284' entity='client.admin' cmd=[{"prefix": "osd map", "pool": "test_pool", "object": "foo", "format": "json"}]: dispatch 2026-03-10T14:58:03.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:03 vm00 bash[20726]: audit 2026-03-10T14:58:03.605626+0000 mon.b (mon.1) 49 : audit [DBG] from='client.? 192.168.123.100:0/3274062284' entity='client.admin' cmd=[{"prefix": "osd map", "pool": "test_pool", "object": "foo", "format": "json"}]: dispatch 2026-03-10T14:58:03.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:03 vm00 bash[20726]: audit 2026-03-10T14:58:03.606056+0000 mon.b (mon.1) 50 : audit [INF] from='client.? 192.168.123.100:0/3274062284' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-10T14:58:03.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:03 vm00 bash[20726]: audit 2026-03-10T14:58:03.606056+0000 mon.b (mon.1) 50 : audit [INF] from='client.? 192.168.123.100:0/3274062284' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-10T14:58:03.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:03 vm00 bash[20726]: audit 2026-03-10T14:58:03.610130+0000 mon.a (mon.0) 1272 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-10T14:58:03.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:03 vm00 bash[20726]: audit 2026-03-10T14:58:03.610130+0000 mon.a (mon.0) 1272 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-10T14:58:04.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:04 vm03 bash[23394]: cluster 2026-03-10T14:58:04.576336+0000 mon.a (mon.0) 1273 : cluster [WRN] Health check failed: noup flag(s) set (OSDMAP_FLAGS) 2026-03-10T14:58:04.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:04 vm03 bash[23394]: cluster 2026-03-10T14:58:04.576336+0000 mon.a (mon.0) 1273 : cluster [WRN] Health check failed: noup flag(s) set (OSDMAP_FLAGS) 2026-03-10T14:58:04.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:04 vm03 bash[23394]: audit 2026-03-10T14:58:04.586228+0000 mon.a (mon.0) 1274 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd set", "key": "noup"}]': finished 2026-03-10T14:58:04.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:04 vm03 bash[23394]: audit 2026-03-10T14:58:04.586228+0000 mon.a (mon.0) 1274 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd set", "key": "noup"}]': finished 2026-03-10T14:58:04.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:04 vm03 bash[23394]: cluster 2026-03-10T14:58:04.589075+0000 mon.a (mon.0) 1275 : cluster [DBG] osdmap e300: 8 total, 8 up, 8 in 2026-03-10T14:58:04.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:04 vm03 bash[23394]: cluster 2026-03-10T14:58:04.589075+0000 mon.a (mon.0) 1275 : cluster [DBG] osdmap e300: 8 total, 8 up, 8 in 2026-03-10T14:58:04.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:04 vm03 bash[23394]: audit 2026-03-10T14:58:04.590947+0000 mon.b (mon.1) 51 : audit [INF] from='client.? 192.168.123.100:0/3274062284' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["4", "0", "7"]}]: dispatch 2026-03-10T14:58:04.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:04 vm03 bash[23394]: audit 2026-03-10T14:58:04.590947+0000 mon.b (mon.1) 51 : audit [INF] from='client.? 
192.168.123.100:0/3274062284' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["4", "0", "7"]}]: dispatch 2026-03-10T14:58:04.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:04 vm03 bash[23394]: audit 2026-03-10T14:58:04.595114+0000 mon.a (mon.0) 1276 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["4", "0", "7"]}]: dispatch 2026-03-10T14:58:04.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:04 vm03 bash[23394]: audit 2026-03-10T14:58:04.595114+0000 mon.a (mon.0) 1276 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["4", "0", "7"]}]: dispatch 2026-03-10T14:58:04.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:04 vm00 bash[28403]: cluster 2026-03-10T14:58:04.576336+0000 mon.a (mon.0) 1273 : cluster [WRN] Health check failed: noup flag(s) set (OSDMAP_FLAGS) 2026-03-10T14:58:04.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:04 vm00 bash[28403]: cluster 2026-03-10T14:58:04.576336+0000 mon.a (mon.0) 1273 : cluster [WRN] Health check failed: noup flag(s) set (OSDMAP_FLAGS) 2026-03-10T14:58:04.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:04 vm00 bash[28403]: audit 2026-03-10T14:58:04.586228+0000 mon.a (mon.0) 1274 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd set", "key": "noup"}]': finished 2026-03-10T14:58:04.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:04 vm00 bash[28403]: audit 2026-03-10T14:58:04.586228+0000 mon.a (mon.0) 1274 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd set", "key": "noup"}]': finished 2026-03-10T14:58:04.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:04 vm00 bash[28403]: cluster 2026-03-10T14:58:04.589075+0000 mon.a (mon.0) 1275 : cluster [DBG] osdmap e300: 8 total, 8 up, 8 in 2026-03-10T14:58:04.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:04 vm00 bash[28403]: cluster 2026-03-10T14:58:04.589075+0000 mon.a (mon.0) 1275 : cluster [DBG] osdmap e300: 8 total, 8 up, 8 in 2026-03-10T14:58:04.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:04 vm00 bash[28403]: audit 2026-03-10T14:58:04.590947+0000 mon.b (mon.1) 51 : audit [INF] from='client.? 192.168.123.100:0/3274062284' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["4", "0", "7"]}]: dispatch 2026-03-10T14:58:04.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:04 vm00 bash[28403]: audit 2026-03-10T14:58:04.590947+0000 mon.b (mon.1) 51 : audit [INF] from='client.? 192.168.123.100:0/3274062284' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["4", "0", "7"]}]: dispatch 2026-03-10T14:58:04.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:04 vm00 bash[28403]: audit 2026-03-10T14:58:04.595114+0000 mon.a (mon.0) 1276 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["4", "0", "7"]}]: dispatch 2026-03-10T14:58:04.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:04 vm00 bash[28403]: audit 2026-03-10T14:58:04.595114+0000 mon.a (mon.0) 1276 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["4", "0", "7"]}]: dispatch 2026-03-10T14:58:04.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:04 vm00 bash[20726]: debug 2026-03-10T14:58:04.590+0000 7f1f6421f640 -1 mon.a@0(leader).osd e300 definitely_dead 0 2026-03-10T14:58:04.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:04 vm00 bash[20726]: cluster 2026-03-10T14:58:04.576336+0000 mon.a (mon.0) 1273 : cluster [WRN] Health check failed: noup flag(s) set (OSDMAP_FLAGS) 2026-03-10T14:58:04.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:04 vm00 bash[20726]: cluster 2026-03-10T14:58:04.576336+0000 mon.a (mon.0) 1273 : cluster [WRN] Health check failed: noup flag(s) set (OSDMAP_FLAGS) 2026-03-10T14:58:04.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:04 vm00 bash[20726]: audit 2026-03-10T14:58:04.586228+0000 mon.a (mon.0) 1274 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd set", "key": "noup"}]': finished 2026-03-10T14:58:04.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:04 vm00 bash[20726]: audit 2026-03-10T14:58:04.586228+0000 mon.a (mon.0) 1274 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd set", "key": "noup"}]': finished 2026-03-10T14:58:04.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:04 vm00 bash[20726]: cluster 2026-03-10T14:58:04.589075+0000 mon.a (mon.0) 1275 : cluster [DBG] osdmap e300: 8 total, 8 up, 8 in 2026-03-10T14:58:04.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:04 vm00 bash[20726]: cluster 2026-03-10T14:58:04.589075+0000 mon.a (mon.0) 1275 : cluster [DBG] osdmap e300: 8 total, 8 up, 8 in 2026-03-10T14:58:04.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:04 vm00 bash[20726]: audit 2026-03-10T14:58:04.590947+0000 mon.b (mon.1) 51 : audit [INF] from='client.? 
192.168.123.100:0/3274062284' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["4", "0", "7"]}]: dispatch 2026-03-10T14:58:04.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:04 vm00 bash[20726]: audit 2026-03-10T14:58:04.590947+0000 mon.b (mon.1) 51 : audit [INF] from='client.? 192.168.123.100:0/3274062284' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["4", "0", "7"]}]: dispatch 2026-03-10T14:58:04.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:04 vm00 bash[20726]: audit 2026-03-10T14:58:04.595114+0000 mon.a (mon.0) 1276 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["4", "0", "7"]}]: dispatch 2026-03-10T14:58:04.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:04 vm00 bash[20726]: audit 2026-03-10T14:58:04.595114+0000 mon.a (mon.0) 1276 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["4", "0", "7"]}]: dispatch 2026-03-10T14:58:05.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:05 vm00 bash[28403]: cluster 2026-03-10T14:58:04.293649+0000 mgr.y (mgr.24425) 240 : cluster [DBG] pgmap v403: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 516 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:58:05.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:05 vm00 bash[28403]: cluster 2026-03-10T14:58:04.293649+0000 mgr.y (mgr.24425) 240 : cluster [DBG] pgmap v403: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 516 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:58:05.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:05 vm00 bash[28403]: cluster 2026-03-10T14:58:05.586961+0000 mon.a (mon.0) 1277 : cluster [WRN] Health check failed: 3 osds down (OSD_DOWN) 2026-03-10T14:58:05.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:05 vm00 bash[28403]: cluster 2026-03-10T14:58:05.586961+0000 mon.a (mon.0) 1277 : cluster [WRN] Health check failed: 3 osds down (OSD_DOWN) 2026-03-10T14:58:05.964 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:05 vm00 bash[28403]: audit 2026-03-10T14:58:05.590140+0000 mon.a (mon.0) 1278 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd down", "ids": ["4", "0", "7"]}]': finished 2026-03-10T14:58:05.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:05 vm00 bash[28403]: audit 2026-03-10T14:58:05.590140+0000 mon.a (mon.0) 1278 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd down", "ids": ["4", "0", "7"]}]': finished 2026-03-10T14:58:05.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:05 vm00 bash[28403]: cluster 2026-03-10T14:58:05.606547+0000 mon.a (mon.0) 1279 : cluster [DBG] osdmap e301: 8 total, 5 up, 8 in 2026-03-10T14:58:05.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:05 vm00 bash[28403]: cluster 2026-03-10T14:58:05.606547+0000 mon.a (mon.0) 1279 : cluster [DBG] osdmap e301: 8 total, 5 up, 8 in 2026-03-10T14:58:05.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:05 vm00 bash[20726]: cluster 2026-03-10T14:58:04.293649+0000 mgr.y (mgr.24425) 240 : cluster [DBG] pgmap v403: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 516 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:58:05.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:05 vm00 bash[20726]: cluster 2026-03-10T14:58:04.293649+0000 mgr.y (mgr.24425) 240 : cluster [DBG] pgmap v403: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 516 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:58:05.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:05 vm00 bash[20726]: cluster 2026-03-10T14:58:05.586961+0000 mon.a (mon.0) 1277 : cluster [WRN] Health check failed: 3 osds down (OSD_DOWN) 2026-03-10T14:58:05.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:05 vm00 bash[20726]: cluster 2026-03-10T14:58:05.586961+0000 mon.a (mon.0) 1277 : cluster [WRN] Health check failed: 3 osds down (OSD_DOWN) 2026-03-10T14:58:05.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:05 
vm00 bash[20726]: audit 2026-03-10T14:58:05.590140+0000 mon.a (mon.0) 1278 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd down", "ids": ["4", "0", "7"]}]': finished 2026-03-10T14:58:05.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:05 vm00 bash[20726]: audit 2026-03-10T14:58:05.590140+0000 mon.a (mon.0) 1278 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd down", "ids": ["4", "0", "7"]}]': finished 2026-03-10T14:58:05.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:05 vm00 bash[20726]: cluster 2026-03-10T14:58:05.606547+0000 mon.a (mon.0) 1279 : cluster [DBG] osdmap e301: 8 total, 5 up, 8 in 2026-03-10T14:58:05.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:05 vm00 bash[20726]: cluster 2026-03-10T14:58:05.606547+0000 mon.a (mon.0) 1279 : cluster [DBG] osdmap e301: 8 total, 5 up, 8 in 2026-03-10T14:58:06.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:05 vm03 bash[23394]: cluster 2026-03-10T14:58:04.293649+0000 mgr.y (mgr.24425) 240 : cluster [DBG] pgmap v403: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 516 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:58:06.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:05 vm03 bash[23394]: cluster 2026-03-10T14:58:04.293649+0000 mgr.y (mgr.24425) 240 : cluster [DBG] pgmap v403: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 516 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:58:06.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:05 vm03 bash[23394]: cluster 2026-03-10T14:58:05.586961+0000 mon.a (mon.0) 1277 : cluster [WRN] Health check failed: 3 osds down (OSD_DOWN) 2026-03-10T14:58:06.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:05 vm03 bash[23394]: cluster 2026-03-10T14:58:05.586961+0000 mon.a (mon.0) 1277 : cluster [WRN] Health check failed: 3 osds down (OSD_DOWN) 2026-03-10T14:58:06.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:05 vm03 bash[23394]: audit 2026-03-10T14:58:05.590140+0000 
mon.a (mon.0) 1278 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd down", "ids": ["4", "0", "7"]}]': finished 2026-03-10T14:58:06.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:05 vm03 bash[23394]: audit 2026-03-10T14:58:05.590140+0000 mon.a (mon.0) 1278 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd down", "ids": ["4", "0", "7"]}]': finished 2026-03-10T14:58:06.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:05 vm03 bash[23394]: cluster 2026-03-10T14:58:05.606547+0000 mon.a (mon.0) 1279 : cluster [DBG] osdmap e301: 8 total, 5 up, 8 in 2026-03-10T14:58:06.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:05 vm03 bash[23394]: cluster 2026-03-10T14:58:05.606547+0000 mon.a (mon.0) 1279 : cluster [DBG] osdmap e301: 8 total, 5 up, 8 in 2026-03-10T14:58:06.964 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 10 14:58:06 vm00 bash[31304]: debug 2026-03-10T14:58:06.670+0000 7f51b5333640 -1 osd.0 302 osdmap NOUP flag is set, waiting for it to clear 2026-03-10T14:58:06.964 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 10 14:58:06 vm00 bash[31304]: debug 2026-03-10T14:58:06.674+0000 7f51c1f4a640 -1 osd.0 302 osdmap NOUP flag is set, waiting for it to clear 2026-03-10T14:58:07.875 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 14:58:07 vm03 bash[44271]: debug 2026-03-10T14:58:07.593+0000 7fd610323640 -1 osd.7 302 osdmap NOUP flag is set, waiting for it to clear 2026-03-10T14:58:07.875 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 14:58:07 vm03 bash[44271]: debug 2026-03-10T14:58:07.653+0000 7fd602ef9640 -1 osd.7 303 osdmap NOUP flag is set, waiting for it to clear 2026-03-10T14:58:07.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:07 vm03 bash[23394]: cluster 2026-03-10T14:58:06.294029+0000 mgr.y (mgr.24425) 241 : cluster [DBG] pgmap v406: 196 pgs: 67 stale+active+clean, 129 active+clean; 455 KiB data, 516 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T14:58:07.875 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:07 vm03 bash[23394]: cluster 2026-03-10T14:58:06.294029+0000 mgr.y (mgr.24425) 241 : cluster [DBG] pgmap v406: 196 pgs: 67 stale+active+clean, 129 active+clean; 455 KiB data, 516 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T14:58:07.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:07 vm03 bash[23394]: cluster 2026-03-10T14:58:06.612156+0000 osd.0 (osd.0) 3 : cluster [WRN] Monitor daemon marked osd.0 down, but it is still running 2026-03-10T14:58:07.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:07 vm03 bash[23394]: cluster 2026-03-10T14:58:06.612156+0000 osd.0 (osd.0) 3 : cluster [WRN] Monitor daemon marked osd.0 down, but it is still running 2026-03-10T14:58:07.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:07 vm03 bash[23394]: cluster 2026-03-10T14:58:06.612159+0000 osd.0 (osd.0) 4 : cluster [DBG] map e301 wrongly marked me down at e301 2026-03-10T14:58:07.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:07 vm03 bash[23394]: cluster 2026-03-10T14:58:06.612159+0000 osd.0 (osd.0) 4 : cluster [DBG] map e301 wrongly marked me down at e301 2026-03-10T14:58:07.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:07 vm03 bash[23394]: cluster 2026-03-10T14:58:06.652792+0000 mon.a (mon.0) 1280 : cluster [DBG] osdmap e302: 8 total, 5 up, 8 in 2026-03-10T14:58:07.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:07 vm03 bash[23394]: cluster 2026-03-10T14:58:06.652792+0000 mon.a (mon.0) 1280 : cluster [DBG] osdmap e302: 8 total, 5 up, 8 in 2026-03-10T14:58:07.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:07 vm03 bash[23394]: cluster 2026-03-10T14:58:06.664445+0000 mon.a (mon.0) 1281 : cluster [INF] osd.0 marked itself dead as of e301 2026-03-10T14:58:07.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:07 vm03 bash[23394]: cluster 2026-03-10T14:58:06.664445+0000 mon.a (mon.0) 1281 : cluster [INF] osd.0 marked itself dead as of e301 
2026-03-10T14:58:07.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:07 vm03 bash[23394]: cluster 2026-03-10T14:58:07.446746+0000 mon.a (mon.0) 1282 : cluster [INF] osd.7 marked itself dead as of e302 2026-03-10T14:58:07.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:07 vm03 bash[23394]: cluster 2026-03-10T14:58:07.446746+0000 mon.a (mon.0) 1282 : cluster [INF] osd.7 marked itself dead as of e302 2026-03-10T14:58:07.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:07 vm00 bash[28403]: cluster 2026-03-10T14:58:06.294029+0000 mgr.y (mgr.24425) 241 : cluster [DBG] pgmap v406: 196 pgs: 67 stale+active+clean, 129 active+clean; 455 KiB data, 516 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T14:58:07.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:07 vm00 bash[28403]: cluster 2026-03-10T14:58:06.294029+0000 mgr.y (mgr.24425) 241 : cluster [DBG] pgmap v406: 196 pgs: 67 stale+active+clean, 129 active+clean; 455 KiB data, 516 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T14:58:07.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:07 vm00 bash[28403]: cluster 2026-03-10T14:58:06.612156+0000 osd.0 (osd.0) 3 : cluster [WRN] Monitor daemon marked osd.0 down, but it is still running 2026-03-10T14:58:07.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:07 vm00 bash[28403]: cluster 2026-03-10T14:58:06.612156+0000 osd.0 (osd.0) 3 : cluster [WRN] Monitor daemon marked osd.0 down, but it is still running 2026-03-10T14:58:07.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:07 vm00 bash[28403]: cluster 2026-03-10T14:58:06.612159+0000 osd.0 (osd.0) 4 : cluster [DBG] map e301 wrongly marked me down at e301 2026-03-10T14:58:07.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:07 vm00 bash[28403]: cluster 2026-03-10T14:58:06.612159+0000 osd.0 (osd.0) 4 : cluster [DBG] map e301 wrongly marked me down at e301 2026-03-10T14:58:07.964 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:07 vm00 bash[28403]: cluster 2026-03-10T14:58:06.652792+0000 mon.a (mon.0) 1280 : cluster [DBG] osdmap e302: 8 total, 5 up, 8 in 2026-03-10T14:58:07.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:07 vm00 bash[28403]: cluster 2026-03-10T14:58:06.652792+0000 mon.a (mon.0) 1280 : cluster [DBG] osdmap e302: 8 total, 5 up, 8 in 2026-03-10T14:58:07.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:07 vm00 bash[28403]: cluster 2026-03-10T14:58:06.664445+0000 mon.a (mon.0) 1281 : cluster [INF] osd.0 marked itself dead as of e301 2026-03-10T14:58:07.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:07 vm00 bash[28403]: cluster 2026-03-10T14:58:06.664445+0000 mon.a (mon.0) 1281 : cluster [INF] osd.0 marked itself dead as of e301 2026-03-10T14:58:07.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:07 vm00 bash[28403]: cluster 2026-03-10T14:58:07.446746+0000 mon.a (mon.0) 1282 : cluster [INF] osd.7 marked itself dead as of e302 2026-03-10T14:58:07.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:07 vm00 bash[28403]: cluster 2026-03-10T14:58:07.446746+0000 mon.a (mon.0) 1282 : cluster [INF] osd.7 marked itself dead as of e302 2026-03-10T14:58:07.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:07 vm00 bash[20726]: cluster 2026-03-10T14:58:06.294029+0000 mgr.y (mgr.24425) 241 : cluster [DBG] pgmap v406: 196 pgs: 67 stale+active+clean, 129 active+clean; 455 KiB data, 516 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T14:58:07.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:07 vm00 bash[20726]: cluster 2026-03-10T14:58:06.294029+0000 mgr.y (mgr.24425) 241 : cluster [DBG] pgmap v406: 196 pgs: 67 stale+active+clean, 129 active+clean; 455 KiB data, 516 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T14:58:07.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:07 vm00 bash[20726]: cluster 
2026-03-10T14:58:06.612156+0000 osd.0 (osd.0) 3 : cluster [WRN] Monitor daemon marked osd.0 down, but it is still running 2026-03-10T14:58:07.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:07 vm00 bash[20726]: cluster 2026-03-10T14:58:06.612156+0000 osd.0 (osd.0) 3 : cluster [WRN] Monitor daemon marked osd.0 down, but it is still running 2026-03-10T14:58:07.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:07 vm00 bash[20726]: cluster 2026-03-10T14:58:06.612159+0000 osd.0 (osd.0) 4 : cluster [DBG] map e301 wrongly marked me down at e301 2026-03-10T14:58:07.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:07 vm00 bash[20726]: cluster 2026-03-10T14:58:06.612159+0000 osd.0 (osd.0) 4 : cluster [DBG] map e301 wrongly marked me down at e301 2026-03-10T14:58:07.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:07 vm00 bash[20726]: cluster 2026-03-10T14:58:06.652792+0000 mon.a (mon.0) 1280 : cluster [DBG] osdmap e302: 8 total, 5 up, 8 in 2026-03-10T14:58:07.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:07 vm00 bash[20726]: cluster 2026-03-10T14:58:06.652792+0000 mon.a (mon.0) 1280 : cluster [DBG] osdmap e302: 8 total, 5 up, 8 in 2026-03-10T14:58:07.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:07 vm00 bash[20726]: cluster 2026-03-10T14:58:06.664445+0000 mon.a (mon.0) 1281 : cluster [INF] osd.0 marked itself dead as of e301 2026-03-10T14:58:07.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:07 vm00 bash[20726]: cluster 2026-03-10T14:58:06.664445+0000 mon.a (mon.0) 1281 : cluster [INF] osd.0 marked itself dead as of e301 2026-03-10T14:58:07.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:07 vm00 bash[20726]: cluster 2026-03-10T14:58:07.446746+0000 mon.a (mon.0) 1282 : cluster [INF] osd.7 marked itself dead as of e302 2026-03-10T14:58:07.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:07 vm00 bash[20726]: cluster 2026-03-10T14:58:07.446746+0000 mon.a (mon.0) 1282 : cluster [INF] osd.7 marked itself dead 
as of e302 2026-03-10T14:58:07.965 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 10 14:58:07 vm00 bash[31304]: debug 2026-03-10T14:58:07.646+0000 7f51b5333640 -1 osd.0 303 osdmap NOUP flag is set, waiting for it to clear 2026-03-10T14:58:08.757 INFO:journalctl@ceph.osd.4.vm03.stdout:Mar 10 14:58:08 vm03 bash[26650]: debug 2026-03-10T14:58:08.481+0000 7f7713a65640 -1 osd.4 303 osdmap NOUP flag is set, waiting for it to clear 2026-03-10T14:58:08.757 INFO:journalctl@ceph.osd.4.vm03.stdout:Mar 10 14:58:08 vm03 bash[26650]: debug 2026-03-10T14:58:08.689+0000 7f770e87b640 -1 osd.4 304 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T14:58:08.757 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 14:58:08 vm03 bash[44271]: debug 2026-03-10T14:58:08.685+0000 7fd60b139640 -1 osd.7 304 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T14:58:08.757 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:08 vm03 bash[23394]: cluster 2026-03-10T14:58:07.442367+0000 osd.7 (osd.7) 3 : cluster [WRN] Monitor daemon marked osd.7 down, but it is still running 2026-03-10T14:58:08.757 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:08 vm03 bash[23394]: cluster 2026-03-10T14:58:07.442367+0000 osd.7 (osd.7) 3 : cluster [WRN] Monitor daemon marked osd.7 down, but it is still running 2026-03-10T14:58:08.757 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:08 vm03 bash[23394]: cluster 2026-03-10T14:58:07.442370+0000 osd.7 (osd.7) 4 : cluster [DBG] map e302 wrongly marked me down at e301 2026-03-10T14:58:08.757 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:08 vm03 bash[23394]: cluster 2026-03-10T14:58:07.442370+0000 osd.7 (osd.7) 4 : cluster [DBG] map e302 wrongly marked me down at e301 2026-03-10T14:58:08.757 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:08 vm03 bash[23394]: cluster 2026-03-10T14:58:07.652551+0000 mon.a (mon.0) 1283 : cluster [DBG] osdmap e303: 8 
total, 5 up, 8 in 2026-03-10T14:58:08.757 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:08 vm03 bash[23394]: cluster 2026-03-10T14:58:07.652551+0000 mon.a (mon.0) 1283 : cluster [DBG] osdmap e303: 8 total, 5 up, 8 in 2026-03-10T14:58:08.757 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:08 vm03 bash[23394]: cluster 2026-03-10T14:58:07.977048+0000 osd.4 (osd.4) 3 : cluster [WRN] Monitor daemon marked osd.4 down, but it is still running 2026-03-10T14:58:08.757 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:08 vm03 bash[23394]: cluster 2026-03-10T14:58:07.977048+0000 osd.4 (osd.4) 3 : cluster [WRN] Monitor daemon marked osd.4 down, but it is still running 2026-03-10T14:58:08.757 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:08 vm03 bash[23394]: cluster 2026-03-10T14:58:07.977050+0000 osd.4 (osd.4) 4 : cluster [DBG] map e303 wrongly marked me down at e301 2026-03-10T14:58:08.757 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:08 vm03 bash[23394]: cluster 2026-03-10T14:58:07.977050+0000 osd.4 (osd.4) 4 : cluster [DBG] map e303 wrongly marked me down at e301 2026-03-10T14:58:08.757 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:08 vm03 bash[23394]: cluster 2026-03-10T14:58:07.981684+0000 mon.a (mon.0) 1284 : cluster [INF] osd.4 marked itself dead as of e303 2026-03-10T14:58:08.757 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:08 vm03 bash[23394]: cluster 2026-03-10T14:58:07.981684+0000 mon.a (mon.0) 1284 : cluster [INF] osd.4 marked itself dead as of e303 2026-03-10T14:58:08.757 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:08 vm03 bash[23394]: audit 2026-03-10T14:58:08.224245+0000 mon.a (mon.0) 1285 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:58:08.757 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:08 vm03 bash[23394]: audit 2026-03-10T14:58:08.224245+0000 mon.a (mon.0) 1285 : audit [DBG] from='mgr.24425 
192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:58:08.757 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:08 vm03 bash[23394]: cluster 2026-03-10T14:58:08.463507+0000 mon.a (mon.0) 1286 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:58:08.757 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:08 vm03 bash[23394]: cluster 2026-03-10T14:58:08.463507+0000 mon.a (mon.0) 1286 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:58:08.757 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:08 vm03 bash[23394]: audit 2026-03-10T14:58:08.564756+0000 mon.a (mon.0) 1287 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:58:08.757 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:08 vm03 bash[23394]: audit 2026-03-10T14:58:08.564756+0000 mon.a (mon.0) 1287 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:58:08.757 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:08 vm03 bash[23394]: audit 2026-03-10T14:58:08.565324+0000 mon.a (mon.0) 1288 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:58:08.757 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:08 vm03 bash[23394]: audit 2026-03-10T14:58:08.565324+0000 mon.a (mon.0) 1288 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:58:08.757 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:08 vm03 bash[23394]: audit 2026-03-10T14:58:08.570459+0000 mon.a (mon.0) 1289 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' 
entity='mgr.y' 2026-03-10T14:58:08.757 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:08 vm03 bash[23394]: audit 2026-03-10T14:58:08.570459+0000 mon.a (mon.0) 1289 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:58:08.757 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:08 vm03 bash[23394]: audit 2026-03-10T14:58:08.601486+0000 mon.b (mon.1) 52 : audit [INF] from='client.? 192.168.123.100:0/3274062284' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:08.758 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:08 vm03 bash[23394]: audit 2026-03-10T14:58:08.601486+0000 mon.b (mon.1) 52 : audit [INF] from='client.? 192.168.123.100:0/3274062284' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:08.758 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:08 vm03 bash[23394]: audit 2026-03-10T14:58:08.605780+0000 mon.a (mon.0) 1290 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:08.758 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:08 vm03 bash[23394]: audit 2026-03-10T14:58:08.605780+0000 mon.a (mon.0) 1290 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:08.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:08 vm00 bash[28403]: cluster 2026-03-10T14:58:07.442367+0000 osd.7 (osd.7) 3 : cluster [WRN] Monitor daemon marked osd.7 down, but it is still running 2026-03-10T14:58:08.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:08 vm00 bash[28403]: cluster 2026-03-10T14:58:07.442367+0000 osd.7 (osd.7) 3 : cluster [WRN] Monitor daemon marked osd.7 down, but it is still running 2026-03-10T14:58:08.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:08 vm00 bash[28403]: cluster 2026-03-10T14:58:07.442370+0000 osd.7 (osd.7) 4 : cluster [DBG] map e302 wrongly marked me down at e301 2026-03-10T14:58:08.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:08 vm00 bash[28403]: cluster 2026-03-10T14:58:07.442370+0000 osd.7 (osd.7) 4 : cluster [DBG] map e302 wrongly marked me down at e301 2026-03-10T14:58:08.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:08 vm00 bash[28403]: cluster 2026-03-10T14:58:07.652551+0000 mon.a (mon.0) 1283 : cluster [DBG] osdmap e303: 8 total, 5 up, 8 in 2026-03-10T14:58:08.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:08 vm00 bash[28403]: cluster 2026-03-10T14:58:07.652551+0000 mon.a (mon.0) 1283 : cluster [DBG] osdmap e303: 8 total, 5 up, 8 in 2026-03-10T14:58:08.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:08 vm00 bash[28403]: cluster 2026-03-10T14:58:07.977048+0000 osd.4 (osd.4) 3 : cluster [WRN] Monitor daemon marked osd.4 down, but it is still running 2026-03-10T14:58:08.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:08 vm00 bash[28403]: cluster 2026-03-10T14:58:07.977048+0000 osd.4 (osd.4) 3 : cluster [WRN] Monitor daemon marked osd.4 down, but it is still running 2026-03-10T14:58:08.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:08 vm00 bash[28403]: cluster 2026-03-10T14:58:07.977050+0000 osd.4 (osd.4) 4 : cluster [DBG] map e303 
wrongly marked me down at e301 2026-03-10T14:58:08.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:08 vm00 bash[28403]: cluster 2026-03-10T14:58:07.977050+0000 osd.4 (osd.4) 4 : cluster [DBG] map e303 wrongly marked me down at e301 2026-03-10T14:58:08.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:08 vm00 bash[28403]: cluster 2026-03-10T14:58:07.981684+0000 mon.a (mon.0) 1284 : cluster [INF] osd.4 marked itself dead as of e303 2026-03-10T14:58:08.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:08 vm00 bash[28403]: cluster 2026-03-10T14:58:07.981684+0000 mon.a (mon.0) 1284 : cluster [INF] osd.4 marked itself dead as of e303 2026-03-10T14:58:08.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:08 vm00 bash[28403]: audit 2026-03-10T14:58:08.224245+0000 mon.a (mon.0) 1285 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:58:08.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:08 vm00 bash[28403]: audit 2026-03-10T14:58:08.224245+0000 mon.a (mon.0) 1285 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:58:08.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:08 vm00 bash[28403]: cluster 2026-03-10T14:58:08.463507+0000 mon.a (mon.0) 1286 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:58:08.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:08 vm00 bash[28403]: cluster 2026-03-10T14:58:08.463507+0000 mon.a (mon.0) 1286 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:58:08.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:08 vm00 bash[28403]: audit 2026-03-10T14:58:08.564756+0000 mon.a (mon.0) 1287 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": 
"config generate-minimal-conf"}]: dispatch 2026-03-10T14:58:08.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:08 vm00 bash[28403]: audit 2026-03-10T14:58:08.564756+0000 mon.a (mon.0) 1287 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:58:08.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:08 vm00 bash[28403]: audit 2026-03-10T14:58:08.565324+0000 mon.a (mon.0) 1288 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:58:08.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:08 vm00 bash[28403]: audit 2026-03-10T14:58:08.565324+0000 mon.a (mon.0) 1288 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:58:08.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:08 vm00 bash[28403]: audit 2026-03-10T14:58:08.570459+0000 mon.a (mon.0) 1289 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:58:08.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:08 vm00 bash[28403]: audit 2026-03-10T14:58:08.570459+0000 mon.a (mon.0) 1289 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:58:08.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:08 vm00 bash[28403]: audit 2026-03-10T14:58:08.601486+0000 mon.b (mon.1) 52 : audit [INF] from='client.? 192.168.123.100:0/3274062284' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:08.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:08 vm00 bash[28403]: audit 2026-03-10T14:58:08.601486+0000 mon.b (mon.1) 52 : audit [INF] from='client.? 
192.168.123.100:0/3274062284' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:08.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:08 vm00 bash[28403]: audit 2026-03-10T14:58:08.605780+0000 mon.a (mon.0) 1290 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:08.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:08 vm00 bash[28403]: audit 2026-03-10T14:58:08.605780+0000 mon.a (mon.0) 1290 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:08.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:08 vm00 bash[20726]: cluster 2026-03-10T14:58:07.442367+0000 osd.7 (osd.7) 3 : cluster [WRN] Monitor daemon marked osd.7 down, but it is still running 2026-03-10T14:58:08.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:08 vm00 bash[20726]: cluster 2026-03-10T14:58:07.442367+0000 osd.7 (osd.7) 3 : cluster [WRN] Monitor daemon marked osd.7 down, but it is still running 2026-03-10T14:58:08.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:08 vm00 bash[20726]: cluster 2026-03-10T14:58:07.442370+0000 osd.7 (osd.7) 4 : cluster [DBG] map e302 wrongly marked me down at e301 2026-03-10T14:58:08.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:08 vm00 bash[20726]: cluster 2026-03-10T14:58:07.442370+0000 osd.7 (osd.7) 4 : cluster [DBG] map e302 wrongly marked me down at e301 2026-03-10T14:58:08.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:08 vm00 bash[20726]: cluster 2026-03-10T14:58:07.652551+0000 mon.a (mon.0) 1283 : cluster [DBG] osdmap e303: 8 total, 5 up, 8 in 2026-03-10T14:58:08.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:08 vm00 bash[20726]: cluster 2026-03-10T14:58:07.652551+0000 mon.a (mon.0) 1283 : cluster [DBG] osdmap e303: 8 total, 5 up, 8 in 2026-03-10T14:58:08.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:08 vm00 
bash[20726]: cluster 2026-03-10T14:58:07.977048+0000 osd.4 (osd.4) 3 : cluster [WRN] Monitor daemon marked osd.4 down, but it is still running 2026-03-10T14:58:08.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:08 vm00 bash[20726]: cluster 2026-03-10T14:58:07.977048+0000 osd.4 (osd.4) 3 : cluster [WRN] Monitor daemon marked osd.4 down, but it is still running 2026-03-10T14:58:08.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:08 vm00 bash[20726]: cluster 2026-03-10T14:58:07.977050+0000 osd.4 (osd.4) 4 : cluster [DBG] map e303 wrongly marked me down at e301 2026-03-10T14:58:08.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:08 vm00 bash[20726]: cluster 2026-03-10T14:58:07.977050+0000 osd.4 (osd.4) 4 : cluster [DBG] map e303 wrongly marked me down at e301 2026-03-10T14:58:08.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:08 vm00 bash[20726]: cluster 2026-03-10T14:58:07.981684+0000 mon.a (mon.0) 1284 : cluster [INF] osd.4 marked itself dead as of e303 2026-03-10T14:58:08.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:08 vm00 bash[20726]: cluster 2026-03-10T14:58:07.981684+0000 mon.a (mon.0) 1284 : cluster [INF] osd.4 marked itself dead as of e303 2026-03-10T14:58:08.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:08 vm00 bash[20726]: audit 2026-03-10T14:58:08.224245+0000 mon.a (mon.0) 1285 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:58:08.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:08 vm00 bash[20726]: audit 2026-03-10T14:58:08.224245+0000 mon.a (mon.0) 1285 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:58:08.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:08 vm00 bash[20726]: cluster 2026-03-10T14:58:08.463507+0000 mon.a (mon.0) 1286 : cluster [WRN] Health check update: 2 pool(s) do not have an 
application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:58:08.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:08 vm00 bash[20726]: cluster 2026-03-10T14:58:08.463507+0000 mon.a (mon.0) 1286 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:58:08.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:08 vm00 bash[20726]: audit 2026-03-10T14:58:08.564756+0000 mon.a (mon.0) 1287 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:58:08.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:08 vm00 bash[20726]: audit 2026-03-10T14:58:08.564756+0000 mon.a (mon.0) 1287 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:58:08.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:08 vm00 bash[20726]: audit 2026-03-10T14:58:08.565324+0000 mon.a (mon.0) 1288 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:58:08.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:08 vm00 bash[20726]: audit 2026-03-10T14:58:08.565324+0000 mon.a (mon.0) 1288 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:58:08.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:08 vm00 bash[20726]: audit 2026-03-10T14:58:08.570459+0000 mon.a (mon.0) 1289 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:58:08.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:08 vm00 bash[20726]: audit 2026-03-10T14:58:08.570459+0000 mon.a (mon.0) 1289 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:58:08.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:08 
vm00 bash[20726]: audit 2026-03-10T14:58:08.601486+0000 mon.b (mon.1) 52 : audit [INF] from='client.? 192.168.123.100:0/3274062284' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:08.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:08 vm00 bash[20726]: audit 2026-03-10T14:58:08.601486+0000 mon.b (mon.1) 52 : audit [INF] from='client.? 192.168.123.100:0/3274062284' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:08.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:08 vm00 bash[20726]: audit 2026-03-10T14:58:08.605780+0000 mon.a (mon.0) 1290 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:08.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:08 vm00 bash[20726]: audit 2026-03-10T14:58:08.605780+0000 mon.a (mon.0) 1290 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:08.965 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 10 14:58:08 vm00 bash[31304]: debug 2026-03-10T14:58:08.690+0000 7f51bd573640 -1 osd.0 304 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T14:58:09.125 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 14:58:08 vm03 bash[48459]: debug there is no tcmu-runner data available 2026-03-10T14:58:10.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:09 vm00 bash[28403]: cluster 2026-03-10T14:58:08.294372+0000 mgr.y (mgr.24425) 242 : cluster [DBG] pgmap v409: 196 pgs: 78 stale+active+clean, 118 active+clean; 455 KiB data, 516 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T14:58:10.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:09 vm00 bash[28403]: cluster 2026-03-10T14:58:08.294372+0000 mgr.y (mgr.24425) 242 : cluster [DBG] pgmap v409: 196 pgs: 78 stale+active+clean, 118 active+clean; 455 KiB 
data, 516 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T14:58:10.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:09 vm00 bash[28403]: cluster 2026-03-10T14:58:08.670450+0000 mon.a (mon.0) 1291 : cluster [INF] Health check cleared: OSDMAP_FLAGS (was: noup flag(s) set) 2026-03-10T14:58:10.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:09 vm00 bash[28403]: cluster 2026-03-10T14:58:08.670450+0000 mon.a (mon.0) 1291 : cluster [INF] Health check cleared: OSDMAP_FLAGS (was: noup flag(s) set) 2026-03-10T14:58:10.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:09 vm00 bash[28403]: audit 2026-03-10T14:58:08.683740+0000 mon.a (mon.0) 1292 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:58:10.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:09 vm00 bash[28403]: audit 2026-03-10T14:58:08.683740+0000 mon.a (mon.0) 1292 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:58:10.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:09 vm00 bash[28403]: cluster 2026-03-10T14:58:08.705701+0000 mon.a (mon.0) 1293 : cluster [DBG] osdmap e304: 8 total, 5 up, 8 in 2026-03-10T14:58:10.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:09 vm00 bash[28403]: cluster 2026-03-10T14:58:08.705701+0000 mon.a (mon.0) 1293 : cluster [DBG] osdmap e304: 8 total, 5 up, 8 in 2026-03-10T14:58:10.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:09 vm00 bash[28403]: audit 2026-03-10T14:58:08.760013+0000 mgr.y (mgr.24425) 243 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:58:10.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:09 vm00 bash[28403]: audit 2026-03-10T14:58:08.760013+0000 mgr.y (mgr.24425) 243 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' 
cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:58:10.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:09 vm00 bash[20726]: cluster 2026-03-10T14:58:08.294372+0000 mgr.y (mgr.24425) 242 : cluster [DBG] pgmap v409: 196 pgs: 78 stale+active+clean, 118 active+clean; 455 KiB data, 516 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T14:58:10.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:09 vm00 bash[20726]: cluster 2026-03-10T14:58:08.294372+0000 mgr.y (mgr.24425) 242 : cluster [DBG] pgmap v409: 196 pgs: 78 stale+active+clean, 118 active+clean; 455 KiB data, 516 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T14:58:10.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:09 vm00 bash[20726]: cluster 2026-03-10T14:58:08.670450+0000 mon.a (mon.0) 1291 : cluster [INF] Health check cleared: OSDMAP_FLAGS (was: noup flag(s) set) 2026-03-10T14:58:10.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:09 vm00 bash[20726]: cluster 2026-03-10T14:58:08.670450+0000 mon.a (mon.0) 1291 : cluster [INF] Health check cleared: OSDMAP_FLAGS (was: noup flag(s) set) 2026-03-10T14:58:10.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:09 vm00 bash[20726]: audit 2026-03-10T14:58:08.683740+0000 mon.a (mon.0) 1292 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:58:10.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:09 vm00 bash[20726]: audit 2026-03-10T14:58:08.683740+0000 mon.a (mon.0) 1292 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:58:10.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:09 vm00 bash[20726]: cluster 2026-03-10T14:58:08.705701+0000 mon.a (mon.0) 1293 : cluster [DBG] osdmap e304: 8 total, 5 up, 8 in 2026-03-10T14:58:10.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:09 vm00 bash[20726]: cluster 2026-03-10T14:58:08.705701+0000 mon.a (mon.0) 1293 : cluster [DBG] osdmap e304: 8 total, 5 up, 8 in 2026-03-10T14:58:10.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:09 vm00 bash[20726]: audit 2026-03-10T14:58:08.760013+0000 mgr.y (mgr.24425) 243 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:58:10.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:09 vm00 bash[20726]: audit 2026-03-10T14:58:08.760013+0000 mgr.y (mgr.24425) 243 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:58:10.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:09 vm03 bash[23394]: cluster 2026-03-10T14:58:08.294372+0000 mgr.y (mgr.24425) 242 : cluster [DBG] pgmap v409: 196 pgs: 78 stale+active+clean, 118 active+clean; 455 KiB data, 516 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T14:58:10.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:09 vm03 bash[23394]: cluster 2026-03-10T14:58:08.294372+0000 mgr.y (mgr.24425) 242 : cluster [DBG] pgmap v409: 196 pgs: 78 stale+active+clean, 118 active+clean; 455 KiB data, 516 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T14:58:10.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:09 vm03 bash[23394]: cluster 2026-03-10T14:58:08.670450+0000 mon.a (mon.0) 1291 : cluster [INF] Health check cleared: OSDMAP_FLAGS (was: noup flag(s) set) 2026-03-10T14:58:10.375 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:09 vm03 bash[23394]: cluster 2026-03-10T14:58:08.670450+0000 mon.a (mon.0) 1291 : cluster [INF] Health check cleared: OSDMAP_FLAGS (was: noup flag(s) set) 2026-03-10T14:58:10.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:09 vm03 bash[23394]: audit 2026-03-10T14:58:08.683740+0000 mon.a (mon.0) 1292 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:58:10.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:09 vm03 bash[23394]: audit 2026-03-10T14:58:08.683740+0000 mon.a (mon.0) 1292 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:58:10.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:09 vm03 bash[23394]: cluster 2026-03-10T14:58:08.705701+0000 mon.a (mon.0) 1293 : cluster [DBG] osdmap e304: 8 total, 5 up, 8 in 2026-03-10T14:58:10.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:09 vm03 bash[23394]: cluster 2026-03-10T14:58:08.705701+0000 mon.a (mon.0) 1293 : cluster [DBG] osdmap e304: 8 total, 5 up, 8 in 2026-03-10T14:58:10.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:09 vm03 bash[23394]: audit 2026-03-10T14:58:08.760013+0000 mgr.y (mgr.24425) 243 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:58:10.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:09 vm03 bash[23394]: audit 2026-03-10T14:58:08.760013+0000 mgr.y (mgr.24425) 243 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:58:11.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:10 vm00 bash[28403]: cluster 2026-03-10T14:58:09.684185+0000 mon.a (mon.0) 1294 : cluster [INF] Health check cleared: OSD_DOWN (was: 3 osds down) 2026-03-10T14:58:11.214 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:10 vm00 bash[28403]: cluster 2026-03-10T14:58:09.684185+0000 mon.a (mon.0) 1294 : cluster [INF] Health check cleared: OSD_DOWN (was: 3 osds down) 2026-03-10T14:58:11.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:10 vm00 bash[28403]: audit 2026-03-10T14:58:09.938051+0000 mon.a (mon.0) 1295 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:58:11.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:10 vm00 bash[28403]: audit 2026-03-10T14:58:09.938051+0000 mon.a (mon.0) 1295 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:58:11.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:10 vm00 bash[28403]: audit 2026-03-10T14:58:09.942668+0000 mon.a (mon.0) 1296 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:58:11.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:10 vm00 bash[28403]: audit 2026-03-10T14:58:09.942668+0000 mon.a (mon.0) 1296 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:58:11.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:10 vm00 bash[28403]: cluster 2026-03-10T14:58:10.028494+0000 mon.a (mon.0) 1297 : cluster [INF] osd.7 v2:192.168.123.103:6812/1578983727 boot 2026-03-10T14:58:11.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:10 vm00 bash[28403]: cluster 2026-03-10T14:58:10.028494+0000 mon.a (mon.0) 1297 : cluster [INF] osd.7 v2:192.168.123.103:6812/1578983727 boot 2026-03-10T14:58:11.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:10 vm00 bash[28403]: cluster 2026-03-10T14:58:10.028680+0000 mon.a (mon.0) 1298 : cluster [INF] osd.4 v2:192.168.123.103:6800/4249951776 boot 2026-03-10T14:58:11.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:10 vm00 bash[28403]: cluster 
2026-03-10T14:58:10.028680+0000 mon.a (mon.0) 1298 : cluster [INF] osd.4 v2:192.168.123.103:6800/4249951776 boot 2026-03-10T14:58:11.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:10 vm00 bash[28403]: cluster 2026-03-10T14:58:10.028792+0000 mon.a (mon.0) 1299 : cluster [INF] osd.0 v2:192.168.123.100:6801/1492812989 boot 2026-03-10T14:58:11.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:10 vm00 bash[28403]: cluster 2026-03-10T14:58:10.028792+0000 mon.a (mon.0) 1299 : cluster [INF] osd.0 v2:192.168.123.100:6801/1492812989 boot 2026-03-10T14:58:11.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:10 vm00 bash[28403]: cluster 2026-03-10T14:58:10.028902+0000 mon.a (mon.0) 1300 : cluster [DBG] osdmap e305: 8 total, 8 up, 8 in 2026-03-10T14:58:11.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:10 vm00 bash[28403]: cluster 2026-03-10T14:58:10.028902+0000 mon.a (mon.0) 1300 : cluster [DBG] osdmap e305: 8 total, 8 up, 8 in 2026-03-10T14:58:11.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:10 vm00 bash[28403]: audit 2026-03-10T14:58:10.030127+0000 mon.a (mon.0) 1301 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T14:58:11.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:10 vm00 bash[28403]: audit 2026-03-10T14:58:10.030127+0000 mon.a (mon.0) 1301 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T14:58:11.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:10 vm00 bash[28403]: audit 2026-03-10T14:58:10.030376+0000 mon.a (mon.0) 1302 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T14:58:11.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:10 vm00 bash[28403]: audit 2026-03-10T14:58:10.030376+0000 mon.a (mon.0) 1302 : audit [DBG] from='mgr.24425 
192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T14:58:11.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:10 vm00 bash[28403]: audit 2026-03-10T14:58:10.030555+0000 mon.a (mon.0) 1303 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T14:58:11.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:10 vm00 bash[28403]: audit 2026-03-10T14:58:10.030555+0000 mon.a (mon.0) 1303 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T14:58:11.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:10 vm00 bash[28403]: cluster 2026-03-10T14:58:10.294895+0000 mgr.y (mgr.24425) 244 : cluster [DBG] pgmap v412: 196 pgs: 165 peering, 31 active+clean; 455 KiB data, 517 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:58:11.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:10 vm00 bash[28403]: cluster 2026-03-10T14:58:10.294895+0000 mgr.y (mgr.24425) 244 : cluster [DBG] pgmap v412: 196 pgs: 165 peering, 31 active+clean; 455 KiB data, 517 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:58:11.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:10 vm00 bash[20726]: cluster 2026-03-10T14:58:09.684185+0000 mon.a (mon.0) 1294 : cluster [INF] Health check cleared: OSD_DOWN (was: 3 osds down) 2026-03-10T14:58:11.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:10 vm00 bash[20726]: cluster 2026-03-10T14:58:09.684185+0000 mon.a (mon.0) 1294 : cluster [INF] Health check cleared: OSD_DOWN (was: 3 osds down) 2026-03-10T14:58:11.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:10 vm00 bash[20726]: audit 2026-03-10T14:58:09.938051+0000 mon.a (mon.0) 1295 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:58:11.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:10 vm00 bash[20726]: audit 
2026-03-10T14:58:09.938051+0000 mon.a (mon.0) 1295 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:58:11.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:10 vm00 bash[20726]: audit 2026-03-10T14:58:09.942668+0000 mon.a (mon.0) 1296 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:58:11.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:10 vm00 bash[20726]: audit 2026-03-10T14:58:09.942668+0000 mon.a (mon.0) 1296 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:58:11.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:10 vm00 bash[20726]: cluster 2026-03-10T14:58:10.028494+0000 mon.a (mon.0) 1297 : cluster [INF] osd.7 v2:192.168.123.103:6812/1578983727 boot 2026-03-10T14:58:11.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:10 vm00 bash[20726]: cluster 2026-03-10T14:58:10.028494+0000 mon.a (mon.0) 1297 : cluster [INF] osd.7 v2:192.168.123.103:6812/1578983727 boot 2026-03-10T14:58:11.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:10 vm00 bash[20726]: cluster 2026-03-10T14:58:10.028680+0000 mon.a (mon.0) 1298 : cluster [INF] osd.4 v2:192.168.123.103:6800/4249951776 boot 2026-03-10T14:58:11.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:10 vm00 bash[20726]: cluster 2026-03-10T14:58:10.028680+0000 mon.a (mon.0) 1298 : cluster [INF] osd.4 v2:192.168.123.103:6800/4249951776 boot 2026-03-10T14:58:11.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:10 vm00 bash[20726]: cluster 2026-03-10T14:58:10.028792+0000 mon.a (mon.0) 1299 : cluster [INF] osd.0 v2:192.168.123.100:6801/1492812989 boot 2026-03-10T14:58:11.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:10 vm00 bash[20726]: cluster 2026-03-10T14:58:10.028792+0000 mon.a (mon.0) 1299 : cluster [INF] osd.0 
v2:192.168.123.100:6801/1492812989 boot 2026-03-10T14:58:11.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:10 vm00 bash[20726]: cluster 2026-03-10T14:58:10.028902+0000 mon.a (mon.0) 1300 : cluster [DBG] osdmap e305: 8 total, 8 up, 8 in 2026-03-10T14:58:11.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:10 vm00 bash[20726]: cluster 2026-03-10T14:58:10.028902+0000 mon.a (mon.0) 1300 : cluster [DBG] osdmap e305: 8 total, 8 up, 8 in 2026-03-10T14:58:11.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:10 vm00 bash[20726]: audit 2026-03-10T14:58:10.030127+0000 mon.a (mon.0) 1301 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T14:58:11.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:10 vm00 bash[20726]: audit 2026-03-10T14:58:10.030127+0000 mon.a (mon.0) 1301 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T14:58:11.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:10 vm00 bash[20726]: audit 2026-03-10T14:58:10.030376+0000 mon.a (mon.0) 1302 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T14:58:11.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:10 vm00 bash[20726]: audit 2026-03-10T14:58:10.030376+0000 mon.a (mon.0) 1302 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T14:58:11.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:10 vm00 bash[20726]: audit 2026-03-10T14:58:10.030555+0000 mon.a (mon.0) 1303 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T14:58:11.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:10 vm00 bash[20726]: audit 2026-03-10T14:58:10.030555+0000 mon.a (mon.0) 
1303 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T14:58:11.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:10 vm00 bash[20726]: cluster 2026-03-10T14:58:10.294895+0000 mgr.y (mgr.24425) 244 : cluster [DBG] pgmap v412: 196 pgs: 165 peering, 31 active+clean; 455 KiB data, 517 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:58:11.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:10 vm00 bash[20726]: cluster 2026-03-10T14:58:10.294895+0000 mgr.y (mgr.24425) 244 : cluster [DBG] pgmap v412: 196 pgs: 165 peering, 31 active+clean; 455 KiB data, 517 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:58:11.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:10 vm03 bash[23394]: cluster 2026-03-10T14:58:09.684185+0000 mon.a (mon.0) 1294 : cluster [INF] Health check cleared: OSD_DOWN (was: 3 osds down) 2026-03-10T14:58:11.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:10 vm03 bash[23394]: cluster 2026-03-10T14:58:09.684185+0000 mon.a (mon.0) 1294 : cluster [INF] Health check cleared: OSD_DOWN (was: 3 osds down) 2026-03-10T14:58:11.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:10 vm03 bash[23394]: audit 2026-03-10T14:58:09.938051+0000 mon.a (mon.0) 1295 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:58:11.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:10 vm03 bash[23394]: audit 2026-03-10T14:58:09.938051+0000 mon.a (mon.0) 1295 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:58:11.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:10 vm03 bash[23394]: audit 2026-03-10T14:58:09.942668+0000 mon.a (mon.0) 1296 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:58:11.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:10 vm03 bash[23394]: audit 
2026-03-10T14:58:09.942668+0000 mon.a (mon.0) 1296 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:58:11.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:10 vm03 bash[23394]: cluster 2026-03-10T14:58:10.028494+0000 mon.a (mon.0) 1297 : cluster [INF] osd.7 v2:192.168.123.103:6812/1578983727 boot 2026-03-10T14:58:11.376 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:10 vm03 bash[23394]: cluster 2026-03-10T14:58:10.028494+0000 mon.a (mon.0) 1297 : cluster [INF] osd.7 v2:192.168.123.103:6812/1578983727 boot 2026-03-10T14:58:11.376 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:10 vm03 bash[23394]: cluster 2026-03-10T14:58:10.028680+0000 mon.a (mon.0) 1298 : cluster [INF] osd.4 v2:192.168.123.103:6800/4249951776 boot 2026-03-10T14:58:11.376 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:10 vm03 bash[23394]: cluster 2026-03-10T14:58:10.028680+0000 mon.a (mon.0) 1298 : cluster [INF] osd.4 v2:192.168.123.103:6800/4249951776 boot 2026-03-10T14:58:11.376 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:10 vm03 bash[23394]: cluster 2026-03-10T14:58:10.028792+0000 mon.a (mon.0) 1299 : cluster [INF] osd.0 v2:192.168.123.100:6801/1492812989 boot 2026-03-10T14:58:11.376 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:10 vm03 bash[23394]: cluster 2026-03-10T14:58:10.028792+0000 mon.a (mon.0) 1299 : cluster [INF] osd.0 v2:192.168.123.100:6801/1492812989 boot 2026-03-10T14:58:11.376 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:10 vm03 bash[23394]: cluster 2026-03-10T14:58:10.028902+0000 mon.a (mon.0) 1300 : cluster [DBG] osdmap e305: 8 total, 8 up, 8 in 2026-03-10T14:58:11.376 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:10 vm03 bash[23394]: cluster 2026-03-10T14:58:10.028902+0000 mon.a (mon.0) 1300 : cluster [DBG] osdmap e305: 8 total, 8 up, 8 in 2026-03-10T14:58:11.376 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:10 vm03 
bash[23394]: audit 2026-03-10T14:58:10.030127+0000 mon.a (mon.0) 1301 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T14:58:11.376 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:10 vm03 bash[23394]: audit 2026-03-10T14:58:10.030127+0000 mon.a (mon.0) 1301 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T14:58:11.376 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:10 vm03 bash[23394]: audit 2026-03-10T14:58:10.030376+0000 mon.a (mon.0) 1302 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T14:58:11.376 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:10 vm03 bash[23394]: audit 2026-03-10T14:58:10.030376+0000 mon.a (mon.0) 1302 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T14:58:11.376 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:10 vm03 bash[23394]: audit 2026-03-10T14:58:10.030555+0000 mon.a (mon.0) 1303 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T14:58:11.376 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:10 vm03 bash[23394]: audit 2026-03-10T14:58:10.030555+0000 mon.a (mon.0) 1303 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T14:58:11.376 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:10 vm03 bash[23394]: cluster 2026-03-10T14:58:10.294895+0000 mgr.y (mgr.24425) 244 : cluster [DBG] pgmap v412: 196 pgs: 165 peering, 31 active+clean; 455 KiB data, 517 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:58:11.376 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:10 vm03 bash[23394]: cluster 
2026-03-10T14:58:10.294895+0000 mgr.y (mgr.24425) 244 : cluster [DBG] pgmap v412: 196 pgs: 165 peering, 31 active+clean; 455 KiB data, 517 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:58:12.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:11 vm03 bash[23394]: cluster 2026-03-10T14:58:10.951308+0000 mon.a (mon.0) 1304 : cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY) 2026-03-10T14:58:12.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:11 vm03 bash[23394]: cluster 2026-03-10T14:58:10.951308+0000 mon.a (mon.0) 1304 : cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY) 2026-03-10T14:58:12.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:11 vm03 bash[23394]: cluster 2026-03-10T14:58:10.984328+0000 mon.a (mon.0) 1305 : cluster [DBG] osdmap e306: 8 total, 8 up, 8 in 2026-03-10T14:58:12.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:11 vm03 bash[23394]: cluster 2026-03-10T14:58:10.984328+0000 mon.a (mon.0) 1305 : cluster [DBG] osdmap e306: 8 total, 8 up, 8 in 2026-03-10T14:58:12.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:11 vm03 bash[23394]: audit 2026-03-10T14:58:11.517851+0000 mon.b (mon.1) 53 : audit [INF] from='client.? 192.168.123.100:0/3274062284' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:12.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:11 vm03 bash[23394]: audit 2026-03-10T14:58:11.517851+0000 mon.b (mon.1) 53 : audit [INF] from='client.? 192.168.123.100:0/3274062284' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:12.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:11 vm03 bash[23394]: audit 2026-03-10T14:58:11.522049+0000 mon.a (mon.0) 1306 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:12.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:11 vm03 bash[23394]: audit 2026-03-10T14:58:11.522049+0000 mon.a (mon.0) 1306 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:12.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:11 vm00 bash[28403]: cluster 2026-03-10T14:58:10.951308+0000 mon.a (mon.0) 1304 : cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY) 2026-03-10T14:58:12.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:11 vm00 bash[28403]: cluster 2026-03-10T14:58:10.951308+0000 mon.a (mon.0) 1304 : cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY) 2026-03-10T14:58:12.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:11 vm00 bash[28403]: cluster 2026-03-10T14:58:10.984328+0000 mon.a (mon.0) 1305 : cluster [DBG] osdmap e306: 8 total, 8 up, 8 in 2026-03-10T14:58:12.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:11 vm00 bash[28403]: cluster 2026-03-10T14:58:10.984328+0000 mon.a (mon.0) 1305 : cluster [DBG] osdmap e306: 8 total, 8 up, 8 in 2026-03-10T14:58:12.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:11 vm00 bash[28403]: audit 2026-03-10T14:58:11.517851+0000 mon.b (mon.1) 53 : audit [INF] from='client.? 192.168.123.100:0/3274062284' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:12.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:11 vm00 bash[28403]: audit 2026-03-10T14:58:11.517851+0000 mon.b (mon.1) 53 : audit [INF] from='client.? 
192.168.123.100:0/3274062284' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T14:58:12.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:11 vm00 bash[28403]: audit 2026-03-10T14:58:11.522049+0000 mon.a (mon.0) 1306 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T14:58:12.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:11 vm00 bash[20726]: cluster 2026-03-10T14:58:10.951308+0000 mon.a (mon.0) 1304 : cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)
2026-03-10T14:58:12.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:11 vm00 bash[20726]: cluster 2026-03-10T14:58:10.984328+0000 mon.a (mon.0) 1305 : cluster [DBG] osdmap e306: 8 total, 8 up, 8 in
2026-03-10T14:58:12.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:11 vm00 bash[20726]: audit 2026-03-10T14:58:11.517851+0000 mon.b (mon.1) 53 : audit [INF] from='client.? 192.168.123.100:0/3274062284' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T14:58:12.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:11 vm00 bash[20726]: audit 2026-03-10T14:58:11.522049+0000 mon.a (mon.0) 1306 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T14:58:12.974 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_aio_read_wait_for_complete PASSED [ 76%]
2026-03-10T14:58:13.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:12 vm03 bash[23394]: audit 2026-03-10T14:58:11.963288+0000 mon.a (mon.0) 1307 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T14:58:13.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:12 vm03 bash[23394]: cluster 2026-03-10T14:58:11.968009+0000 mon.a (mon.0) 1308 : cluster [DBG] osdmap e307: 8 total, 8 up, 8 in
2026-03-10T14:58:13.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:12 vm03 bash[23394]: cluster 2026-03-10T14:58:12.295287+0000 mgr.y (mgr.24425) 245 : cluster [DBG] pgmap v415: 196 pgs: 165 peering, 31 active+clean; 455 KiB data, 517 MiB used, 159 GiB / 160 GiB avail
2026-03-10T14:58:13.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:13 vm00 bash[28403]: audit 2026-03-10T14:58:11.963288+0000 mon.a (mon.0) 1307 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T14:58:13.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:13 vm00 bash[28403]: cluster 2026-03-10T14:58:11.968009+0000 mon.a (mon.0) 1308 : cluster [DBG] osdmap e307: 8 total, 8 up, 8 in
2026-03-10T14:58:13.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:13 vm00 bash[28403]: cluster 2026-03-10T14:58:12.295287+0000 mgr.y (mgr.24425) 245 : cluster [DBG] pgmap v415: 196 pgs: 165 peering, 31 active+clean; 455 KiB data, 517 MiB used, 159 GiB / 160 GiB avail
2026-03-10T14:58:13.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:13 vm00 bash[20726]: audit 2026-03-10T14:58:11.963288+0000 mon.a (mon.0) 1307 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T14:58:13.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:13 vm00 bash[20726]: cluster 2026-03-10T14:58:11.968009+0000 mon.a (mon.0) 1308 : cluster [DBG] osdmap e307: 8 total, 8 up, 8 in
2026-03-10T14:58:13.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:13 vm00 bash[20726]: cluster 2026-03-10T14:58:12.295287+0000 mgr.y (mgr.24425) 245 : cluster [DBG] pgmap v415: 196 pgs: 165 peering, 31 active+clean; 455 KiB data, 517 MiB used, 159 GiB / 160 GiB avail
2026-03-10T14:58:14.026 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:58:13 vm00 bash[21005]: ::ffff:192.168.123.103 - - [10/Mar/2026:14:58:13] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T14:58:14.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:14 vm03 bash[23394]: cluster 2026-03-10T14:58:12.971077+0000 mon.a (mon.0) 1309 : cluster [DBG] osdmap e308: 8 total, 8 up, 8 in
2026-03-10T14:58:14.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:14 vm00 bash[28403]: cluster 2026-03-10T14:58:12.971077+0000 mon.a (mon.0) 1309 : cluster [DBG] osdmap e308: 8 total, 8 up, 8 in
2026-03-10T14:58:14.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:14 vm00 bash[20726]: cluster 2026-03-10T14:58:12.971077+0000 mon.a (mon.0) 1309 : cluster [DBG] osdmap e308: 8 total, 8 up, 8 in
2026-03-10T14:58:15.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:15 vm03 bash[23394]: cluster 2026-03-10T14:58:14.021372+0000 mon.a (mon.0) 1310 : cluster [DBG] osdmap e309: 8 total, 8 up, 8 in
2026-03-10T14:58:15.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:15 vm03 bash[23394]: cluster 2026-03-10T14:58:14.295584+0000 mgr.y (mgr.24425) 246 : cluster [DBG] pgmap v418: 196 pgs: 32 unknown, 139 peering, 25 active+clean; 455 KiB data, 517 MiB used, 159 GiB / 160 GiB avail
2026-03-10T14:58:15.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:15 vm03 bash[23394]: cluster 2026-03-10T14:58:15.009221+0000 mon.a (mon.0) 1311 : cluster [DBG] osdmap e310: 8 total, 8 up, 8 in
2026-03-10T14:58:15.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:15 vm00 bash[28403]: cluster 2026-03-10T14:58:14.021372+0000 mon.a (mon.0) 1310 : cluster [DBG] osdmap e309: 8 total, 8 up, 8 in
2026-03-10T14:58:15.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:15 vm00 bash[28403]: cluster 2026-03-10T14:58:14.295584+0000 mgr.y (mgr.24425) 246 : cluster [DBG] pgmap v418: 196 pgs: 32 unknown, 139 peering, 25 active+clean; 455 KiB data, 517 MiB used, 159 GiB / 160 GiB avail
2026-03-10T14:58:15.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:15 vm00 bash[28403]: cluster 2026-03-10T14:58:15.009221+0000 mon.a (mon.0) 1311 : cluster [DBG] osdmap e310: 8 total, 8 up, 8 in
2026-03-10T14:58:15.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:15 vm00 bash[20726]: cluster 2026-03-10T14:58:14.021372+0000 mon.a (mon.0) 1310 : cluster [DBG] osdmap e309: 8 total, 8 up, 8 in
2026-03-10T14:58:15.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:15 vm00 bash[20726]: cluster 2026-03-10T14:58:14.295584+0000 mgr.y (mgr.24425) 246 : cluster [DBG] pgmap v418: 196 pgs: 32 unknown, 139 peering, 25 active+clean; 455 KiB data, 517 MiB used, 159 GiB / 160 GiB avail
2026-03-10T14:58:15.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:15 vm00 bash[20726]: cluster 2026-03-10T14:58:15.009221+0000 mon.a (mon.0) 1311 : cluster [DBG] osdmap e310: 8 total, 8 up, 8 in
2026-03-10T14:58:16.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:16 vm03 bash[23394]: audit 2026-03-10T14:58:15.047804+0000 mon.c (mon.2) 46 : audit [DBG] from='client.? 192.168.123.100:0/4126501519' entity='client.admin' cmd=[{"prefix": "osd map", "pool": "test_pool", "object": "foo", "format": "json"}]: dispatch
2026-03-10T14:58:16.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:16 vm03 bash[23394]: audit 2026-03-10T14:58:15.048069+0000 mon.c (mon.2) 47 : audit [INF] from='client.? 192.168.123.100:0/4126501519' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch
2026-03-10T14:58:16.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:16 vm03 bash[23394]: audit 2026-03-10T14:58:15.048504+0000 mon.a (mon.0) 1312 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch
2026-03-10T14:58:16.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:16 vm03 bash[23394]: cluster 2026-03-10T14:58:16.005652+0000 mon.a (mon.0) 1313 : cluster [WRN] Health check failed: noup flag(s) set (OSDMAP_FLAGS)
2026-03-10T14:58:16.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:16 vm03 bash[23394]: audit 2026-03-10T14:58:16.017015+0000 mon.a (mon.0) 1314 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd set", "key": "noup"}]': finished
2026-03-10T14:58:16.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:16 vm03 bash[23394]: audit 2026-03-10T14:58:16.021341+0000 mon.c (mon.2) 48 : audit [INF] from='client.? 192.168.123.100:0/4126501519' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["2", "5", "7"]}]: dispatch
2026-03-10T14:58:16.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:16 vm03 bash[23394]: cluster 2026-03-10T14:58:16.027799+0000 mon.a (mon.0) 1315 : cluster [DBG] osdmap e311: 8 total, 8 up, 8 in
2026-03-10T14:58:16.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:16 vm03 bash[23394]: audit 2026-03-10T14:58:16.028753+0000 mon.a (mon.0) 1316 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["2", "5", "7"]}]: dispatch
2026-03-10T14:58:16.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:16 vm00 bash[28403]: audit 2026-03-10T14:58:15.047804+0000 mon.c (mon.2) 46 : audit [DBG] from='client.? 192.168.123.100:0/4126501519' entity='client.admin' cmd=[{"prefix": "osd map", "pool": "test_pool", "object": "foo", "format": "json"}]: dispatch
2026-03-10T14:58:16.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:16 vm00 bash[28403]: audit 2026-03-10T14:58:15.048069+0000 mon.c (mon.2) 47 : audit [INF] from='client.? 192.168.123.100:0/4126501519' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch
2026-03-10T14:58:16.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:16 vm00 bash[28403]: audit 2026-03-10T14:58:15.048504+0000 mon.a (mon.0) 1312 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch
2026-03-10T14:58:16.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:16 vm00 bash[28403]: cluster 2026-03-10T14:58:16.005652+0000 mon.a (mon.0) 1313 : cluster [WRN] Health check failed: noup flag(s) set (OSDMAP_FLAGS)
2026-03-10T14:58:16.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:16 vm00 bash[28403]: audit 2026-03-10T14:58:16.017015+0000 mon.a (mon.0) 1314 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd set", "key": "noup"}]': finished
2026-03-10T14:58:16.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:16 vm00 bash[28403]: audit 2026-03-10T14:58:16.021341+0000 mon.c (mon.2) 48 : audit [INF] from='client.? 192.168.123.100:0/4126501519' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["2", "5", "7"]}]: dispatch
2026-03-10T14:58:16.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:16 vm00 bash[28403]: cluster 2026-03-10T14:58:16.027799+0000 mon.a (mon.0) 1315 : cluster [DBG] osdmap e311: 8 total, 8 up, 8 in
2026-03-10T14:58:16.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:16 vm00 bash[28403]: audit 2026-03-10T14:58:16.028753+0000 mon.a (mon.0) 1316 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["2", "5", "7"]}]: dispatch
2026-03-10T14:58:16.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:16 vm00 bash[20726]: debug 2026-03-10T14:58:16.022+0000 7f1f6421f640 -1 mon.a@0(leader).osd e311 definitely_dead 0
2026-03-10T14:58:16.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:16 vm00 bash[20726]: audit 2026-03-10T14:58:15.047804+0000 mon.c (mon.2) 46 : audit [DBG] from='client.? 192.168.123.100:0/4126501519' entity='client.admin' cmd=[{"prefix": "osd map", "pool": "test_pool", "object": "foo", "format": "json"}]: dispatch
2026-03-10T14:58:16.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:16 vm00 bash[20726]: audit 2026-03-10T14:58:15.048069+0000 mon.c (mon.2) 47 : audit [INF] from='client.? 192.168.123.100:0/4126501519' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch
2026-03-10T14:58:16.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:16 vm00 bash[20726]: audit 2026-03-10T14:58:15.048504+0000 mon.a (mon.0) 1312 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch
2026-03-10T14:58:16.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:16 vm00 bash[20726]: cluster 2026-03-10T14:58:16.005652+0000 mon.a (mon.0) 1313 : cluster [WRN] Health check failed: noup flag(s) set (OSDMAP_FLAGS)
2026-03-10T14:58:16.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:16 vm00 bash[20726]: audit 2026-03-10T14:58:16.017015+0000 mon.a (mon.0) 1314 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd set", "key": "noup"}]': finished
2026-03-10T14:58:16.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:16 vm00 bash[20726]: audit 2026-03-10T14:58:16.021341+0000 mon.c (mon.2) 48 : audit [INF] from='client.? 192.168.123.100:0/4126501519' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["2", "5", "7"]}]: dispatch
2026-03-10T14:58:16.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:16 vm00 bash[20726]: cluster 2026-03-10T14:58:16.027799+0000 mon.a (mon.0) 1315 : cluster [DBG] osdmap e311: 8 total, 8 up, 8 in
2026-03-10T14:58:16.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:16 vm00 bash[20726]: audit 2026-03-10T14:58:16.028753+0000 mon.a (mon.0) 1316 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["2", "5", "7"]}]: dispatch
2026-03-10T14:58:17.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:17 vm03 bash[23394]: cluster 2026-03-10T14:58:16.295821+0000 mgr.y (mgr.24425) 247 : cluster [DBG] pgmap v421: 196 pgs: 196 active+clean; 455 KiB data, 517 MiB used, 159 GiB / 160 GiB avail
2026-03-10T14:58:17.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:17 vm03 bash[23394]: cluster 2026-03-10T14:58:17.019322+0000 mon.a (mon.0) 1317 : cluster [WRN] Health check failed: 3 osds down (OSD_DOWN)
2026-03-10T14:58:17.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:17 vm03 bash[23394]: audit 2026-03-10T14:58:17.022185+0000 mon.a (mon.0) 1318 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd down", "ids": ["2", "5", "7"]}]': finished
2026-03-10T14:58:17.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:17 vm03 bash[23394]: cluster 2026-03-10T14:58:17.035951+0000 mon.a (mon.0) 1319 : cluster [DBG] osdmap e312: 8 total, 5 up, 8 in
2026-03-10T14:58:17.375 INFO:journalctl@ceph.osd.5.vm03.stdout:Mar 10 14:58:17 vm03 bash[32416]: debug 2026-03-10T14:58:17.249+0000 7f3f52277640 -1 osd.5 312 osdmap NOUP flag is set, waiting for it to clear
2026-03-10T14:58:17.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:17 vm00 bash[28403]: cluster 2026-03-10T14:58:16.295821+0000 mgr.y (mgr.24425) 247 : cluster [DBG] pgmap v421: 196 pgs: 196 active+clean; 455 KiB data, 517 MiB used, 159 GiB / 160 GiB avail
2026-03-10T14:58:17.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:17 vm00 bash[28403]: cluster 2026-03-10T14:58:17.019322+0000 mon.a (mon.0) 1317 : cluster [WRN] Health check failed: 3 osds down (OSD_DOWN)
2026-03-10T14:58:17.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:17 vm00 bash[28403]: audit 2026-03-10T14:58:17.022185+0000 mon.a (mon.0) 1318 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd down", "ids": ["2", "5", "7"]}]': finished
2026-03-10T14:58:17.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:17 vm00 bash[28403]: cluster 2026-03-10T14:58:17.035951+0000 mon.a (mon.0) 1319 : cluster [DBG] osdmap e312: 8 total, 5 up, 8 in
2026-03-10T14:58:17.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:17 vm00 bash[20726]: cluster 2026-03-10T14:58:16.295821+0000 mgr.y (mgr.24425) 247 : cluster [DBG] pgmap v421: 196 pgs: 196 active+clean; 455 KiB data, 517 MiB used, 159 GiB / 160 GiB avail
2026-03-10T14:58:17.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:17 vm00 bash[20726]: cluster 2026-03-10T14:58:17.019322+0000 mon.a (mon.0) 1317 : cluster [WRN] Health check failed: 3 osds down (OSD_DOWN)
2026-03-10T14:58:17.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:17 vm00 bash[20726]: audit 2026-03-10T14:58:17.022185+0000 mon.a (mon.0) 1318 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd down", "ids": ["2", "5", "7"]}]': finished
2026-03-10T14:58:17.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:17 vm00 bash[20726]: cluster 2026-03-10T14:58:17.035951+0000 mon.a (mon.0) 1319 : cluster [DBG] osdmap e312: 8 total, 5 up, 8 in
2026-03-10T14:58:18.030 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 14:58:17 vm03 bash[44271]: debug 2026-03-10T14:58:17.741+0000 7fd60fb10640 -1 osd.7 312 osdmap NOUP flag is set, waiting for it to clear
2026-03-10T14:58:18.375 INFO:journalctl@ceph.osd.5.vm03.stdout:Mar 10 14:58:18 vm03 bash[32416]: debug 2026-03-10T14:58:18.021+0000 7f3f44e4d640 -1 osd.5 313 osdmap NOUP flag is set, waiting for it to clear
2026-03-10T14:58:18.375 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 14:58:18 vm03 bash[44271]: debug 2026-03-10T14:58:18.029+0000 7fd602ef9640 -1 osd.7 313 osdmap NOUP flag is set, waiting for it to clear
2026-03-10T14:58:18.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:18 vm03 bash[23394]: cluster 2026-03-10T14:58:17.044901+0000 mon.a (mon.0) 1320 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 2 pgs peering)
2026-03-10T14:58:18.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:18 vm03 bash[23394]: cluster 2026-03-10T14:58:17.075183+0000 osd.5 (osd.5) 3 : cluster [WRN] Monitor daemon marked osd.5 down, but it is still running
2026-03-10T14:58:18.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:18 vm03 bash[23394]: cluster 2026-03-10T14:58:17.075185+0000 osd.5 (osd.5) 4 : cluster [DBG] map e312 wrongly marked me down at e312
2026-03-10T14:58:18.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:18 vm03 bash[23394]: cluster 2026-03-10T14:58:17.082052+0000 mon.a (mon.0) 1321 : cluster [INF] osd.5 marked itself dead as of e312
2026-03-10T14:58:18.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:18 vm03 bash[23394]: cluster 2026-03-10T14:58:17.478413+0000 osd.7 (osd.7) 5 : cluster [WRN] Monitor daemon marked osd.7 down, but it is still running
2026-03-10T14:58:18.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:18 vm03 bash[23394]: cluster 2026-03-10T14:58:17.478416+0000 osd.7 (osd.7) 6 : cluster [DBG] map e312 wrongly marked me down at e312
2026-03-10T14:58:18.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:18 vm03 bash[23394]: cluster 2026-03-10T14:58:17.482757+0000 mon.a (mon.0) 1322 : cluster [INF] osd.7 marked itself dead as of e312
2026-03-10T14:58:18.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:18 vm03 bash[23394]: cluster 2026-03-10T14:58:17.973284+0000 mon.a (mon.0) 1323 : cluster [INF] osd.2 marked itself dead as of e312
2026-03-10T14:58:18.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:18 vm03 bash[23394]: cluster 2026-03-10T14:58:18.028749+0000 mon.a (mon.0) 1324 : cluster [DBG] osdmap e313: 8 total, 5 up, 8 in
2026-03-10T14:58:18.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:18 vm00 bash[28403]: cluster 2026-03-10T14:58:17.044901+0000 mon.a (mon.0) 1320 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 2 pgs peering)
2026-03-10T14:58:18.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:18 vm00 bash[28403]: cluster 2026-03-10T14:58:17.075183+0000 osd.5 (osd.5) 3 : cluster [WRN] Monitor daemon marked osd.5 down, but it is still running
2026-03-10T14:58:18.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:18 vm00 bash[28403]: cluster 2026-03-10T14:58:17.075185+0000 osd.5 (osd.5) 4 : cluster [DBG] map e312 wrongly marked me down at e312
2026-03-10T14:58:18.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:18 vm00 bash[28403]: cluster 2026-03-10T14:58:17.082052+0000 mon.a (mon.0) 1321 : cluster [INF] osd.5 marked itself dead as of e312
2026-03-10T14:58:18.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:18 vm00 bash[28403]: cluster 2026-03-10T14:58:17.478413+0000 osd.7 (osd.7) 5 : cluster [WRN] Monitor daemon marked osd.7 down, but it is still running
2026-03-10T14:58:18.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:18 vm00 bash[28403]: cluster
2026-03-10T14:58:17.478416+0000 osd.7 (osd.7) 6 : cluster [DBG] map e312 wrongly marked me down at e312 2026-03-10T14:58:18.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:18 vm00 bash[28403]: cluster 2026-03-10T14:58:17.478416+0000 osd.7 (osd.7) 6 : cluster [DBG] map e312 wrongly marked me down at e312 2026-03-10T14:58:18.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:18 vm00 bash[28403]: cluster 2026-03-10T14:58:17.482757+0000 mon.a (mon.0) 1322 : cluster [INF] osd.7 marked itself dead as of e312 2026-03-10T14:58:18.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:18 vm00 bash[28403]: cluster 2026-03-10T14:58:17.482757+0000 mon.a (mon.0) 1322 : cluster [INF] osd.7 marked itself dead as of e312 2026-03-10T14:58:18.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:18 vm00 bash[28403]: cluster 2026-03-10T14:58:17.973284+0000 mon.a (mon.0) 1323 : cluster [INF] osd.2 marked itself dead as of e312 2026-03-10T14:58:18.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:18 vm00 bash[28403]: cluster 2026-03-10T14:58:17.973284+0000 mon.a (mon.0) 1323 : cluster [INF] osd.2 marked itself dead as of e312 2026-03-10T14:58:18.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:18 vm00 bash[28403]: cluster 2026-03-10T14:58:18.028749+0000 mon.a (mon.0) 1324 : cluster [DBG] osdmap e313: 8 total, 5 up, 8 in 2026-03-10T14:58:18.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:18 vm00 bash[28403]: cluster 2026-03-10T14:58:18.028749+0000 mon.a (mon.0) 1324 : cluster [DBG] osdmap e313: 8 total, 5 up, 8 in 2026-03-10T14:58:18.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:18 vm00 bash[20726]: cluster 2026-03-10T14:58:17.044901+0000 mon.a (mon.0) 1320 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 2 pgs peering) 2026-03-10T14:58:18.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:18 vm00 bash[20726]: cluster 2026-03-10T14:58:17.044901+0000 mon.a (mon.0) 1320 : cluster [INF] Health check 
cleared: PG_AVAILABILITY (was: Reduced data availability: 2 pgs peering) 2026-03-10T14:58:18.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:18 vm00 bash[20726]: cluster 2026-03-10T14:58:17.075183+0000 osd.5 (osd.5) 3 : cluster [WRN] Monitor daemon marked osd.5 down, but it is still running 2026-03-10T14:58:18.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:18 vm00 bash[20726]: cluster 2026-03-10T14:58:17.075183+0000 osd.5 (osd.5) 3 : cluster [WRN] Monitor daemon marked osd.5 down, but it is still running 2026-03-10T14:58:18.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:18 vm00 bash[20726]: cluster 2026-03-10T14:58:17.075185+0000 osd.5 (osd.5) 4 : cluster [DBG] map e312 wrongly marked me down at e312 2026-03-10T14:58:18.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:18 vm00 bash[20726]: cluster 2026-03-10T14:58:17.075185+0000 osd.5 (osd.5) 4 : cluster [DBG] map e312 wrongly marked me down at e312 2026-03-10T14:58:18.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:18 vm00 bash[20726]: cluster 2026-03-10T14:58:17.082052+0000 mon.a (mon.0) 1321 : cluster [INF] osd.5 marked itself dead as of e312 2026-03-10T14:58:18.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:18 vm00 bash[20726]: cluster 2026-03-10T14:58:17.082052+0000 mon.a (mon.0) 1321 : cluster [INF] osd.5 marked itself dead as of e312 2026-03-10T14:58:18.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:18 vm00 bash[20726]: cluster 2026-03-10T14:58:17.478413+0000 osd.7 (osd.7) 5 : cluster [WRN] Monitor daemon marked osd.7 down, but it is still running 2026-03-10T14:58:18.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:18 vm00 bash[20726]: cluster 2026-03-10T14:58:17.478413+0000 osd.7 (osd.7) 5 : cluster [WRN] Monitor daemon marked osd.7 down, but it is still running 2026-03-10T14:58:18.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:18 vm00 bash[20726]: cluster 2026-03-10T14:58:17.478416+0000 osd.7 (osd.7) 6 : cluster [DBG] map e312 
wrongly marked me down at e312 2026-03-10T14:58:18.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:18 vm00 bash[20726]: cluster 2026-03-10T14:58:17.478416+0000 osd.7 (osd.7) 6 : cluster [DBG] map e312 wrongly marked me down at e312 2026-03-10T14:58:18.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:18 vm00 bash[20726]: cluster 2026-03-10T14:58:17.482757+0000 mon.a (mon.0) 1322 : cluster [INF] osd.7 marked itself dead as of e312 2026-03-10T14:58:18.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:18 vm00 bash[20726]: cluster 2026-03-10T14:58:17.482757+0000 mon.a (mon.0) 1322 : cluster [INF] osd.7 marked itself dead as of e312 2026-03-10T14:58:18.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:18 vm00 bash[20726]: cluster 2026-03-10T14:58:17.973284+0000 mon.a (mon.0) 1323 : cluster [INF] osd.2 marked itself dead as of e312 2026-03-10T14:58:18.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:18 vm00 bash[20726]: cluster 2026-03-10T14:58:17.973284+0000 mon.a (mon.0) 1323 : cluster [INF] osd.2 marked itself dead as of e312 2026-03-10T14:58:18.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:18 vm00 bash[20726]: cluster 2026-03-10T14:58:18.028749+0000 mon.a (mon.0) 1324 : cluster [DBG] osdmap e313: 8 total, 5 up, 8 in 2026-03-10T14:58:18.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:18 vm00 bash[20726]: cluster 2026-03-10T14:58:18.028749+0000 mon.a (mon.0) 1324 : cluster [DBG] osdmap e313: 8 total, 5 up, 8 in 2026-03-10T14:58:19.072 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 14:58:18 vm03 bash[48459]: debug there is no tcmu-runner data available 2026-03-10T14:58:19.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:19 vm00 bash[28403]: cluster 2026-03-10T14:58:17.973016+0000 osd.2 (osd.2) 3 : cluster [WRN] Monitor daemon marked osd.2 down, but it is still running 2026-03-10T14:58:19.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:19 vm00 bash[28403]: cluster 2026-03-10T14:58:17.973016+0000 
osd.2 (osd.2) 3 : cluster [WRN] Monitor daemon marked osd.2 down, but it is still running 2026-03-10T14:58:19.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:19 vm00 bash[28403]: cluster 2026-03-10T14:58:17.973020+0000 osd.2 (osd.2) 4 : cluster [DBG] map e312 wrongly marked me down at e312 2026-03-10T14:58:19.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:19 vm00 bash[28403]: cluster 2026-03-10T14:58:17.973020+0000 osd.2 (osd.2) 4 : cluster [DBG] map e312 wrongly marked me down at e312 2026-03-10T14:58:19.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:19 vm00 bash[28403]: cluster 2026-03-10T14:58:18.296175+0000 mgr.y (mgr.24425) 248 : cluster [DBG] pgmap v424: 196 pgs: 77 stale+active+clean, 119 active+clean; 455 KiB data, 517 MiB used, 159 GiB / 160 GiB avail; 255 B/s wr, 0 op/s 2026-03-10T14:58:19.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:19 vm00 bash[28403]: cluster 2026-03-10T14:58:18.296175+0000 mgr.y (mgr.24425) 248 : cluster [DBG] pgmap v424: 196 pgs: 77 stale+active+clean, 119 active+clean; 455 KiB data, 517 MiB used, 159 GiB / 160 GiB avail; 255 B/s wr, 0 op/s 2026-03-10T14:58:19.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:19 vm00 bash[28403]: audit 2026-03-10T14:58:18.770649+0000 mgr.y (mgr.24425) 249 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:58:19.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:19 vm00 bash[28403]: audit 2026-03-10T14:58:18.770649+0000 mgr.y (mgr.24425) 249 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:58:19.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:19 vm00 bash[20726]: cluster 2026-03-10T14:58:17.973016+0000 osd.2 (osd.2) 3 : cluster [WRN] Monitor daemon marked osd.2 down, but it is still running 2026-03-10T14:58:19.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 
14:58:19 vm00 bash[20726]: cluster 2026-03-10T14:58:17.973016+0000 osd.2 (osd.2) 3 : cluster [WRN] Monitor daemon marked osd.2 down, but it is still running 2026-03-10T14:58:19.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:19 vm00 bash[20726]: cluster 2026-03-10T14:58:17.973020+0000 osd.2 (osd.2) 4 : cluster [DBG] map e312 wrongly marked me down at e312 2026-03-10T14:58:19.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:19 vm00 bash[20726]: cluster 2026-03-10T14:58:17.973020+0000 osd.2 (osd.2) 4 : cluster [DBG] map e312 wrongly marked me down at e312 2026-03-10T14:58:19.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:19 vm00 bash[20726]: cluster 2026-03-10T14:58:18.296175+0000 mgr.y (mgr.24425) 248 : cluster [DBG] pgmap v424: 196 pgs: 77 stale+active+clean, 119 active+clean; 455 KiB data, 517 MiB used, 159 GiB / 160 GiB avail; 255 B/s wr, 0 op/s 2026-03-10T14:58:19.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:19 vm00 bash[20726]: cluster 2026-03-10T14:58:18.296175+0000 mgr.y (mgr.24425) 248 : cluster [DBG] pgmap v424: 196 pgs: 77 stale+active+clean, 119 active+clean; 455 KiB data, 517 MiB used, 159 GiB / 160 GiB avail; 255 B/s wr, 0 op/s 2026-03-10T14:58:19.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:19 vm00 bash[20726]: audit 2026-03-10T14:58:18.770649+0000 mgr.y (mgr.24425) 249 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:58:19.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:19 vm00 bash[20726]: audit 2026-03-10T14:58:18.770649+0000 mgr.y (mgr.24425) 249 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:58:19.215 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 10 14:58:18 vm00 bash[43300]: debug 2026-03-10T14:58:18.898+0000 7ff5f8d3b640 -1 osd.2 313 osdmap NOUP flag is set, waiting for it to clear 2026-03-10T14:58:19.375 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:19 vm03 bash[23394]: cluster 2026-03-10T14:58:17.973016+0000 osd.2 (osd.2) 3 : cluster [WRN] Monitor daemon marked osd.2 down, but it is still running 2026-03-10T14:58:19.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:19 vm03 bash[23394]: cluster 2026-03-10T14:58:17.973016+0000 osd.2 (osd.2) 3 : cluster [WRN] Monitor daemon marked osd.2 down, but it is still running 2026-03-10T14:58:19.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:19 vm03 bash[23394]: cluster 2026-03-10T14:58:17.973020+0000 osd.2 (osd.2) 4 : cluster [DBG] map e312 wrongly marked me down at e312 2026-03-10T14:58:19.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:19 vm03 bash[23394]: cluster 2026-03-10T14:58:17.973020+0000 osd.2 (osd.2) 4 : cluster [DBG] map e312 wrongly marked me down at e312 2026-03-10T14:58:19.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:19 vm03 bash[23394]: cluster 2026-03-10T14:58:18.296175+0000 mgr.y (mgr.24425) 248 : cluster [DBG] pgmap v424: 196 pgs: 77 stale+active+clean, 119 active+clean; 455 KiB data, 517 MiB used, 159 GiB / 160 GiB avail; 255 B/s wr, 0 op/s 2026-03-10T14:58:19.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:19 vm03 bash[23394]: cluster 2026-03-10T14:58:18.296175+0000 mgr.y (mgr.24425) 248 : cluster [DBG] pgmap v424: 196 pgs: 77 stale+active+clean, 119 active+clean; 455 KiB data, 517 MiB used, 159 GiB / 160 GiB avail; 255 B/s wr, 0 op/s 2026-03-10T14:58:19.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:19 vm03 bash[23394]: audit 2026-03-10T14:58:18.770649+0000 mgr.y (mgr.24425) 249 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:58:19.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:19 vm03 bash[23394]: audit 2026-03-10T14:58:18.770649+0000 mgr.y (mgr.24425) 249 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service 
status", "format": "json"}]: dispatch 2026-03-10T14:58:20.375 INFO:journalctl@ceph.osd.5.vm03.stdout:Mar 10 14:58:20 vm03 bash[32416]: debug 2026-03-10T14:58:20.081+0000 7f3f4d08d640 -1 osd.5 314 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T14:58:20.375 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 14:58:20 vm03 bash[44271]: debug 2026-03-10T14:58:20.085+0000 7fd60b139640 -1 osd.7 314 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T14:58:20.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:20 vm03 bash[23394]: audit 2026-03-10T14:58:20.048815+0000 mon.c (mon.2) 49 : audit [INF] from='client.? 192.168.123.100:0/4126501519' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:20.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:20 vm03 bash[23394]: audit 2026-03-10T14:58:20.048815+0000 mon.c (mon.2) 49 : audit [INF] from='client.? 192.168.123.100:0/4126501519' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:20.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:20 vm03 bash[23394]: audit 2026-03-10T14:58:20.049050+0000 mon.a (mon.0) 1325 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:20.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:20 vm03 bash[23394]: audit 2026-03-10T14:58:20.049050+0000 mon.a (mon.0) 1325 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:20.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:20 vm00 bash[28403]: audit 2026-03-10T14:58:20.048815+0000 mon.c (mon.2) 49 : audit [INF] from='client.? 
192.168.123.100:0/4126501519' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:20.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:20 vm00 bash[28403]: audit 2026-03-10T14:58:20.048815+0000 mon.c (mon.2) 49 : audit [INF] from='client.? 192.168.123.100:0/4126501519' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:20.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:20 vm00 bash[28403]: audit 2026-03-10T14:58:20.049050+0000 mon.a (mon.0) 1325 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:20.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:20 vm00 bash[28403]: audit 2026-03-10T14:58:20.049050+0000 mon.a (mon.0) 1325 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:20.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:20 vm00 bash[20726]: audit 2026-03-10T14:58:20.048815+0000 mon.c (mon.2) 49 : audit [INF] from='client.? 192.168.123.100:0/4126501519' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:20.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:20 vm00 bash[20726]: audit 2026-03-10T14:58:20.048815+0000 mon.c (mon.2) 49 : audit [INF] from='client.? 192.168.123.100:0/4126501519' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:20.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:20 vm00 bash[20726]: audit 2026-03-10T14:58:20.049050+0000 mon.a (mon.0) 1325 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:20.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:20 vm00 bash[20726]: audit 2026-03-10T14:58:20.049050+0000 mon.a (mon.0) 1325 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:20.464 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 10 14:58:20 vm00 bash[43300]: debug 2026-03-10T14:58:20.082+0000 7ff5f3b51640 -1 osd.2 314 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T14:58:21.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:21 vm00 bash[28403]: cluster 2026-03-10T14:58:20.075219+0000 mon.a (mon.0) 1326 : cluster [INF] Health check cleared: OSDMAP_FLAGS (was: noup flag(s) set) 2026-03-10T14:58:21.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:21 vm00 bash[28403]: cluster 2026-03-10T14:58:20.075219+0000 mon.a (mon.0) 1326 : cluster [INF] Health check cleared: OSDMAP_FLAGS (was: noup flag(s) set) 2026-03-10T14:58:21.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:21 vm00 bash[28403]: audit 2026-03-10T14:58:20.084702+0000 mon.a (mon.0) 1327 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:58:21.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:21 vm00 bash[28403]: audit 2026-03-10T14:58:20.084702+0000 mon.a (mon.0) 1327 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:58:21.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:21 vm00 bash[28403]: cluster 2026-03-10T14:58:20.094036+0000 mon.a (mon.0) 1328 : cluster [DBG] osdmap e314: 8 total, 5 up, 8 in 2026-03-10T14:58:21.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:21 vm00 bash[28403]: cluster 2026-03-10T14:58:20.094036+0000 mon.a (mon.0) 1328 : cluster [DBG] osdmap e314: 8 total, 5 up, 8 in 2026-03-10T14:58:21.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:21 vm00 bash[28403]: cluster 2026-03-10T14:58:20.296851+0000 mgr.y (mgr.24425) 250 : cluster [DBG] pgmap v426: 196 pgs: 33 undersized+peered, 83 active+undersized, 9 stale+active+clean, 8 undersized+degraded+peered, 31 active+undersized+degraded, 32 active+clean; 455 KiB data, 518 MiB used, 159 GiB / 160 GiB avail; 195/600 objects degraded (32.500%) 2026-03-10T14:58:21.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:21 vm00 bash[28403]: cluster 2026-03-10T14:58:20.296851+0000 mgr.y (mgr.24425) 250 : cluster [DBG] pgmap v426: 196 pgs: 33 undersized+peered, 83 active+undersized, 9 stale+active+clean, 8 undersized+degraded+peered, 31 active+undersized+degraded, 32 active+clean; 455 KiB data, 518 MiB used, 159 GiB / 160 GiB avail; 195/600 objects degraded (32.500%) 2026-03-10T14:58:21.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:21 vm00 bash[20726]: cluster 2026-03-10T14:58:20.075219+0000 mon.a (mon.0) 1326 : cluster [INF] Health check cleared: OSDMAP_FLAGS (was: noup flag(s) set) 2026-03-10T14:58:21.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:21 vm00 bash[20726]: cluster 2026-03-10T14:58:20.075219+0000 mon.a (mon.0) 1326 : cluster [INF] Health check cleared: OSDMAP_FLAGS (was: noup flag(s) set) 2026-03-10T14:58:21.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:21 vm00 bash[20726]: audit 2026-03-10T14:58:20.084702+0000 mon.a (mon.0) 1327 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:58:21.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:21 vm00 bash[20726]: audit 2026-03-10T14:58:20.084702+0000 mon.a (mon.0) 1327 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:58:21.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:21 vm00 bash[20726]: cluster 2026-03-10T14:58:20.094036+0000 mon.a (mon.0) 1328 : cluster [DBG] osdmap e314: 8 total, 5 up, 8 in 2026-03-10T14:58:21.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:21 vm00 bash[20726]: cluster 2026-03-10T14:58:20.094036+0000 mon.a (mon.0) 1328 : cluster [DBG] osdmap e314: 8 total, 5 up, 8 in 2026-03-10T14:58:21.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:21 vm00 bash[20726]: cluster 2026-03-10T14:58:20.296851+0000 mgr.y (mgr.24425) 250 : cluster [DBG] pgmap v426: 196 pgs: 33 undersized+peered, 83 active+undersized, 9 stale+active+clean, 8 undersized+degraded+peered, 31 active+undersized+degraded, 32 active+clean; 455 KiB data, 518 MiB used, 159 GiB / 160 GiB avail; 195/600 objects degraded (32.500%) 2026-03-10T14:58:21.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:21 vm00 bash[20726]: cluster 2026-03-10T14:58:20.296851+0000 mgr.y (mgr.24425) 250 : cluster [DBG] pgmap v426: 196 pgs: 33 undersized+peered, 83 active+undersized, 9 stale+active+clean, 8 undersized+degraded+peered, 31 active+undersized+degraded, 32 active+clean; 455 KiB data, 518 MiB used, 159 GiB / 160 GiB avail; 195/600 objects degraded (32.500%) 2026-03-10T14:58:21.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:21 vm03 bash[23394]: cluster 2026-03-10T14:58:20.075219+0000 mon.a (mon.0) 1326 : cluster [INF] Health check cleared: OSDMAP_FLAGS (was: noup flag(s) set) 2026-03-10T14:58:21.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:21 vm03 bash[23394]: cluster 2026-03-10T14:58:20.075219+0000 mon.a (mon.0) 1326 : 
cluster [INF] Health check cleared: OSDMAP_FLAGS (was: noup flag(s) set) 2026-03-10T14:58:21.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:21 vm03 bash[23394]: audit 2026-03-10T14:58:20.084702+0000 mon.a (mon.0) 1327 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:58:21.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:21 vm03 bash[23394]: audit 2026-03-10T14:58:20.084702+0000 mon.a (mon.0) 1327 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:58:21.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:21 vm03 bash[23394]: cluster 2026-03-10T14:58:20.094036+0000 mon.a (mon.0) 1328 : cluster [DBG] osdmap e314: 8 total, 5 up, 8 in 2026-03-10T14:58:21.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:21 vm03 bash[23394]: cluster 2026-03-10T14:58:20.094036+0000 mon.a (mon.0) 1328 : cluster [DBG] osdmap e314: 8 total, 5 up, 8 in 2026-03-10T14:58:21.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:21 vm03 bash[23394]: cluster 2026-03-10T14:58:20.296851+0000 mgr.y (mgr.24425) 250 : cluster [DBG] pgmap v426: 196 pgs: 33 undersized+peered, 83 active+undersized, 9 stale+active+clean, 8 undersized+degraded+peered, 31 active+undersized+degraded, 32 active+clean; 455 KiB data, 518 MiB used, 159 GiB / 160 GiB avail; 195/600 objects degraded (32.500%) 2026-03-10T14:58:21.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:21 vm03 bash[23394]: cluster 2026-03-10T14:58:20.296851+0000 mgr.y (mgr.24425) 250 : cluster [DBG] pgmap v426: 196 pgs: 33 undersized+peered, 83 active+undersized, 9 stale+active+clean, 8 undersized+degraded+peered, 31 active+undersized+degraded, 32 active+clean; 455 KiB data, 518 MiB used, 159 GiB / 160 GiB avail; 195/600 objects degraded (32.500%) 2026-03-10T14:58:22.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:22 vm00 bash[28403]: cluster 
2026-03-10T14:58:21.085506+0000 mon.a (mon.0) 1329 : cluster [INF] Health check cleared: OSD_DOWN (was: 3 osds down) 2026-03-10T14:58:22.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:22 vm00 bash[28403]: cluster 2026-03-10T14:58:21.085506+0000 mon.a (mon.0) 1329 : cluster [INF] Health check cleared: OSD_DOWN (was: 3 osds down) 2026-03-10T14:58:22.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:22 vm00 bash[28403]: cluster 2026-03-10T14:58:21.085586+0000 mon.a (mon.0) 1330 : cluster [WRN] Health check failed: Reduced data availability: 2 pgs inactive (PG_AVAILABILITY) 2026-03-10T14:58:22.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:22 vm00 bash[28403]: cluster 2026-03-10T14:58:21.085586+0000 mon.a (mon.0) 1330 : cluster [WRN] Health check failed: Reduced data availability: 2 pgs inactive (PG_AVAILABILITY) 2026-03-10T14:58:22.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:22 vm00 bash[28403]: cluster 2026-03-10T14:58:21.085595+0000 mon.a (mon.0) 1331 : cluster [WRN] Health check failed: Degraded data redundancy: 195/600 objects degraded (32.500%), 39 pgs degraded (PG_DEGRADED) 2026-03-10T14:58:22.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:22 vm00 bash[28403]: cluster 2026-03-10T14:58:21.085595+0000 mon.a (mon.0) 1331 : cluster [WRN] Health check failed: Degraded data redundancy: 195/600 objects degraded (32.500%), 39 pgs degraded (PG_DEGRADED) 2026-03-10T14:58:22.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:22 vm00 bash[28403]: cluster 2026-03-10T14:58:21.165570+0000 mon.a (mon.0) 1332 : cluster [INF] osd.2 v2:192.168.123.100:6809/4087124508 boot 2026-03-10T14:58:22.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:22 vm00 bash[28403]: cluster 2026-03-10T14:58:21.165570+0000 mon.a (mon.0) 1332 : cluster [INF] osd.2 v2:192.168.123.100:6809/4087124508 boot 2026-03-10T14:58:22.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:22 vm00 bash[28403]: cluster 2026-03-10T14:58:21.169841+0000 mon.a 
(mon.0) 1333 : cluster [INF] osd.5 v2:192.168.123.103:6804/413751251 boot 2026-03-10T14:58:22.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:22 vm00 bash[28403]: cluster 2026-03-10T14:58:21.169841+0000 mon.a (mon.0) 1333 : cluster [INF] osd.5 v2:192.168.123.103:6804/413751251 boot 2026-03-10T14:58:22.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:22 vm00 bash[28403]: cluster 2026-03-10T14:58:21.169864+0000 mon.a (mon.0) 1334 : cluster [INF] osd.7 v2:192.168.123.103:6812/1578983727 boot 2026-03-10T14:58:22.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:22 vm00 bash[28403]: cluster 2026-03-10T14:58:21.169864+0000 mon.a (mon.0) 1334 : cluster [INF] osd.7 v2:192.168.123.103:6812/1578983727 boot 2026-03-10T14:58:22.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:22 vm00 bash[28403]: cluster 2026-03-10T14:58:21.169879+0000 mon.a (mon.0) 1335 : cluster [DBG] osdmap e315: 8 total, 8 up, 8 in 2026-03-10T14:58:22.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:22 vm00 bash[28403]: cluster 2026-03-10T14:58:21.169879+0000 mon.a (mon.0) 1335 : cluster [DBG] osdmap e315: 8 total, 8 up, 8 in 2026-03-10T14:58:22.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:22 vm00 bash[28403]: audit 2026-03-10T14:58:21.170207+0000 mon.a (mon.0) 1336 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T14:58:22.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:22 vm00 bash[28403]: audit 2026-03-10T14:58:21.170207+0000 mon.a (mon.0) 1336 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T14:58:22.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:22 vm00 bash[28403]: audit 2026-03-10T14:58:21.170401+0000 mon.a (mon.0) 1337 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 
2026-03-10T14:58:22.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:22 vm00 bash[28403]: audit 2026-03-10T14:58:21.170401+0000 mon.a (mon.0) 1337 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T14:58:22.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:22 vm00 bash[28403]: audit 2026-03-10T14:58:21.170579+0000 mon.a (mon.0) 1338 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T14:58:22.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:22 vm00 bash[20726]: cluster 2026-03-10T14:58:21.085506+0000 mon.a (mon.0) 1329 : cluster [INF] Health check cleared: OSD_DOWN (was: 3 osds down)
2026-03-10T14:58:22.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:22 vm00 bash[20726]: cluster 2026-03-10T14:58:21.085586+0000 mon.a (mon.0) 1330 : cluster [WRN] Health check failed: Reduced data availability: 2 pgs inactive (PG_AVAILABILITY)
2026-03-10T14:58:22.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:22 vm00 bash[20726]: cluster 2026-03-10T14:58:21.085595+0000 mon.a (mon.0) 1331 : cluster [WRN] Health check failed: Degraded data redundancy: 195/600 objects degraded (32.500%), 39 pgs degraded (PG_DEGRADED)
2026-03-10T14:58:22.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:22 vm00 bash[20726]: cluster 2026-03-10T14:58:21.165570+0000 mon.a (mon.0) 1332 : cluster [INF] osd.2 v2:192.168.123.100:6809/4087124508 boot
2026-03-10T14:58:22.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:22 vm00 bash[20726]: cluster 2026-03-10T14:58:21.169841+0000 mon.a (mon.0) 1333 : cluster [INF] osd.5 v2:192.168.123.103:6804/413751251 boot
2026-03-10T14:58:22.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:22 vm00 bash[20726]: cluster 2026-03-10T14:58:21.169864+0000 mon.a (mon.0) 1334 : cluster [INF] osd.7 v2:192.168.123.103:6812/1578983727 boot
2026-03-10T14:58:22.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:22 vm00 bash[20726]: cluster 2026-03-10T14:58:21.169879+0000 mon.a (mon.0) 1335 : cluster [DBG] osdmap e315: 8 total, 8 up, 8 in
2026-03-10T14:58:22.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:22 vm00 bash[20726]: audit 2026-03-10T14:58:21.170207+0000 mon.a (mon.0) 1336 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T14:58:22.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:22 vm00 bash[20726]: audit 2026-03-10T14:58:21.170401+0000 mon.a (mon.0) 1337 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T14:58:22.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:22 vm00 bash[20726]: audit 2026-03-10T14:58:21.170579+0000 mon.a (mon.0) 1338 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T14:58:22.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:22 vm03 bash[23394]: cluster 2026-03-10T14:58:21.085506+0000 mon.a (mon.0) 1329 : cluster [INF] Health check cleared: OSD_DOWN (was: 3 osds down)
2026-03-10T14:58:22.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:22 vm03 bash[23394]: cluster 2026-03-10T14:58:21.085586+0000 mon.a (mon.0) 1330 : cluster [WRN] Health check failed: Reduced data availability: 2 pgs inactive (PG_AVAILABILITY)
2026-03-10T14:58:22.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:22 vm03 bash[23394]: cluster 2026-03-10T14:58:21.085595+0000 mon.a (mon.0) 1331 : cluster [WRN] Health check failed: Degraded data redundancy: 195/600 objects degraded (32.500%), 39 pgs degraded (PG_DEGRADED)
2026-03-10T14:58:22.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:22 vm03 bash[23394]: cluster 2026-03-10T14:58:21.165570+0000 mon.a (mon.0) 1332 : cluster [INF] osd.2 v2:192.168.123.100:6809/4087124508 boot
2026-03-10T14:58:22.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:22 vm03 bash[23394]: cluster 2026-03-10T14:58:21.169841+0000 mon.a (mon.0) 1333 : cluster [INF] osd.5 v2:192.168.123.103:6804/413751251 boot
2026-03-10T14:58:22.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:22 vm03 bash[23394]: cluster 2026-03-10T14:58:21.169864+0000 mon.a (mon.0) 1334 : cluster [INF] osd.7 v2:192.168.123.103:6812/1578983727 boot
2026-03-10T14:58:22.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:22 vm03 bash[23394]: cluster 2026-03-10T14:58:21.169879+0000 mon.a (mon.0) 1335 : cluster [DBG] osdmap e315: 8 total, 8 up, 8 in
2026-03-10T14:58:22.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:22 vm03 bash[23394]: audit 2026-03-10T14:58:21.170207+0000 mon.a (mon.0) 1336 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T14:58:22.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:22 vm03 bash[23394]: audit 2026-03-10T14:58:21.170401+0000 mon.a (mon.0) 1337 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T14:58:22.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:22 vm03 bash[23394]: audit 2026-03-10T14:58:21.170579+0000 mon.a (mon.0) 1338 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T14:58:23.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:23 vm03 bash[23394]: cluster 2026-03-10T14:58:22.218072+0000 mon.a (mon.0) 1339 : cluster [DBG] osdmap e316: 8 total, 8 up, 8 in
2026-03-10T14:58:23.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:23 vm03 bash[23394]: cluster 2026-03-10T14:58:22.297283+0000 mgr.y (mgr.24425) 251 : cluster [DBG] pgmap v429: 196 pgs: 33 undersized+peered, 83 active+undersized, 9 stale+active+clean, 8 undersized+degraded+peered, 31 active+undersized+degraded, 32 active+clean; 455 KiB data, 518 MiB used, 159 GiB / 160 GiB avail; 195/600 objects degraded (32.500%)
2026-03-10T14:58:23.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:23 vm03 bash[23394]: audit 2026-03-10T14:58:22.991129+0000 mon.c (mon.2) 50 : audit [INF] from='client.? 192.168.123.100:0/4126501519' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T14:58:23.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:23 vm03 bash[23394]: audit 2026-03-10T14:58:22.991371+0000 mon.a (mon.0) 1340 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T14:58:23.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:23 vm00 bash[28403]: cluster 2026-03-10T14:58:22.218072+0000 mon.a (mon.0) 1339 : cluster [DBG] osdmap e316: 8 total, 8 up, 8 in
2026-03-10T14:58:23.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:23 vm00 bash[28403]: cluster 2026-03-10T14:58:22.297283+0000 mgr.y (mgr.24425) 251 : cluster [DBG] pgmap v429: 196 pgs: 33 undersized+peered, 83 active+undersized, 9 stale+active+clean, 8 undersized+degraded+peered, 31 active+undersized+degraded, 32 active+clean; 455 KiB data, 518 MiB used, 159 GiB / 160 GiB avail; 195/600 objects degraded (32.500%)
2026-03-10T14:58:23.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:23 vm00 bash[28403]: audit 2026-03-10T14:58:22.991129+0000 mon.c (mon.2) 50 : audit [INF] from='client.? 192.168.123.100:0/4126501519' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T14:58:23.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:23 vm00 bash[28403]: audit 2026-03-10T14:58:22.991371+0000 mon.a (mon.0) 1340 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T14:58:23.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:23 vm00 bash[20726]: cluster 2026-03-10T14:58:22.218072+0000 mon.a (mon.0) 1339 : cluster [DBG] osdmap e316: 8 total, 8 up, 8 in
2026-03-10T14:58:23.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:23 vm00 bash[20726]: cluster 2026-03-10T14:58:22.297283+0000 mgr.y (mgr.24425) 251 : cluster [DBG] pgmap v429: 196 pgs: 33 undersized+peered, 83 active+undersized, 9 stale+active+clean, 8 undersized+degraded+peered, 31 active+undersized+degraded, 32 active+clean; 455 KiB data, 518 MiB used, 159 GiB / 160 GiB avail; 195/600 objects degraded (32.500%)
2026-03-10T14:58:23.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:23 vm00 bash[20726]: audit 2026-03-10T14:58:22.991129+0000 mon.c (mon.2) 50 : audit [INF] from='client.? 192.168.123.100:0/4126501519' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T14:58:23.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:23 vm00 bash[20726]: audit 2026-03-10T14:58:22.991371+0000 mon.a (mon.0) 1340 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T14:58:24.214 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:58:23 vm00 bash[21005]: ::ffff:192.168.123.103 - - [10/Mar/2026:14:58:23] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T14:58:24.409 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_aio_read_wait_for_complete_and_cb PASSED [ 78%]
2026-03-10T14:58:24.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:24 vm00 bash[28403]: audit 2026-03-10T14:58:23.290925+0000 mon.a (mon.0) 1341 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T14:58:24.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:24 vm00 bash[28403]: cluster 2026-03-10T14:58:23.298830+0000 mon.a (mon.0) 1342 : cluster [DBG] osdmap e317: 8 total, 8 up, 8 in
2026-03-10T14:58:24.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:24 vm00 bash[20726]: audit 2026-03-10T14:58:23.290925+0000 mon.a (mon.0) 1341 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T14:58:24.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:24 vm00 bash[20726]: cluster 2026-03-10T14:58:23.298830+0000 mon.a (mon.0) 1342 : cluster [DBG] osdmap e317: 8 total, 8 up, 8 in
2026-03-10T14:58:24.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:24 vm03 bash[23394]: audit 2026-03-10T14:58:23.290925+0000 mon.a (mon.0) 1341 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T14:58:24.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:24 vm03 bash[23394]: cluster 2026-03-10T14:58:23.298830+0000 mon.a (mon.0) 1342 : cluster [DBG] osdmap e317: 8 total, 8 up, 8 in
2026-03-10T14:58:25.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:25 vm00 bash[28403]: cluster 2026-03-10T14:58:24.297710+0000 mgr.y (mgr.24425) 252 : cluster [DBG] pgmap v431: 196 pgs: 33 undersized+peered, 83 active+undersized, 9 stale+active+clean, 8 undersized+degraded+peered, 31 active+undersized+degraded, 32 active+clean; 455 KiB data, 518 MiB used, 159 GiB / 160 GiB avail; 195/600 objects degraded (32.500%)
2026-03-10T14:58:25.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:25 vm00 bash[28403]: cluster 2026-03-10T14:58:24.409651+0000 mon.a (mon.0) 1343 : cluster [DBG] osdmap e318: 8 total, 8 up, 8 in
2026-03-10T14:58:25.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:25 vm00 bash[28403]: audit 2026-03-10T14:58:25.053572+0000 mon.a (mon.0) 1344 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:58:25.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:25 vm00 bash[28403]: audit 2026-03-10T14:58:25.054521+0000 mon.a (mon.0) 1345 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:58:25.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:25 vm00 bash[20726]: cluster 2026-03-10T14:58:24.297710+0000 mgr.y (mgr.24425) 252 : cluster [DBG] pgmap v431: 196 pgs: 33 undersized+peered, 83 active+undersized, 9 stale+active+clean, 8 undersized+degraded+peered, 31 active+undersized+degraded, 32 active+clean; 455 KiB data, 518 MiB used, 159 GiB / 160 GiB avail; 195/600 objects degraded (32.500%)
2026-03-10T14:58:25.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:25 vm00 bash[20726]: cluster 2026-03-10T14:58:24.409651+0000 mon.a (mon.0) 1343 : cluster [DBG] osdmap e318: 8 total, 8 up, 8 in
2026-03-10T14:58:25.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:25 vm00 bash[20726]: audit 2026-03-10T14:58:25.053572+0000 mon.a (mon.0) 1344 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:58:25.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:25 vm00 bash[20726]: audit 2026-03-10T14:58:25.054521+0000 mon.a (mon.0) 1345 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:58:25.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:25 vm03 bash[23394]: cluster 2026-03-10T14:58:24.297710+0000 mgr.y (mgr.24425) 252 : cluster [DBG] pgmap v431: 196 pgs: 33 undersized+peered, 83 active+undersized, 9 stale+active+clean, 8 undersized+degraded+peered, 31 active+undersized+degraded, 32 active+clean; 455 KiB data, 518 MiB used, 159 GiB / 160 GiB avail; 195/600 objects degraded (32.500%)
2026-03-10T14:58:25.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:25 vm03 bash[23394]: cluster 2026-03-10T14:58:24.409651+0000 mon.a (mon.0) 1343 : cluster [DBG] osdmap e318: 8 total, 8 up, 8 in
2026-03-10T14:58:25.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:25 vm03 bash[23394]: audit 2026-03-10T14:58:25.053572+0000 mon.a (mon.0) 1344 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:58:25.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:25 vm03 bash[23394]: audit 2026-03-10T14:58:25.054521+0000 mon.a (mon.0) 1345 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:58:26.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:26 vm00 bash[28403]: cluster 2026-03-10T14:58:25.422763+0000 mon.a (mon.0) 1346 : cluster [DBG] osdmap e319: 8 total, 8 up, 8 in
2026-03-10T14:58:26.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:26 vm00 bash[28403]: audit 2026-03-10T14:58:25.433415+0000 mon.b (mon.1) 54 : audit [DBG] from='client.? 192.168.123.100:0/2375185877' entity='client.admin' cmd=[{"prefix": "osd map", "pool": "test_pool", "object": "bar", "format": "json"}]: dispatch
2026-03-10T14:58:26.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:26 vm00 bash[28403]: audit 2026-03-10T14:58:25.434069+0000 mon.b (mon.1) 55 : audit [INF] from='client.? 192.168.123.100:0/2375185877' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch
2026-03-10T14:58:26.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:26 vm00 bash[28403]: audit 2026-03-10T14:58:25.441217+0000 mon.a (mon.0) 1347 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch
2026-03-10T14:58:26.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:26 vm00 bash[20726]: cluster 2026-03-10T14:58:25.422763+0000 mon.a (mon.0) 1346 : cluster [DBG] osdmap e319: 8 total, 8 up, 8 in
2026-03-10T14:58:26.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:26 vm00 bash[20726]: audit 2026-03-10T14:58:25.433415+0000 mon.b (mon.1) 54 : audit [DBG] from='client.? 192.168.123.100:0/2375185877' entity='client.admin' cmd=[{"prefix": "osd map", "pool": "test_pool", "object": "bar", "format": "json"}]: dispatch
2026-03-10T14:58:26.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:26 vm00 bash[20726]: audit 2026-03-10T14:58:25.434069+0000 mon.b (mon.1) 55 : audit [INF] from='client.? 192.168.123.100:0/2375185877' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch
2026-03-10T14:58:26.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:26 vm00 bash[20726]: audit 2026-03-10T14:58:25.441217+0000 mon.a (mon.0) 1347 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch
2026-03-10T14:58:26.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:26 vm00 bash[20726]: debug 2026-03-10T14:58:26.446+0000 7f1f6421f640 -1 mon.a@0(leader).osd e320 definitely_dead 0
2026-03-10T14:58:26.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:26 vm03 bash[23394]: cluster 2026-03-10T14:58:25.422763+0000 mon.a (mon.0) 1346 : cluster [DBG] osdmap e319: 8 total, 8 up, 8 in
2026-03-10T14:58:26.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:26 vm03 bash[23394]: audit 2026-03-10T14:58:25.433415+0000 mon.b (mon.1) 54 : audit [DBG] from='client.? 192.168.123.100:0/2375185877' entity='client.admin' cmd=[{"prefix": "osd map", "pool": "test_pool", "object": "bar", "format": "json"}]: dispatch
2026-03-10T14:58:26.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:26 vm03 bash[23394]: audit 2026-03-10T14:58:25.434069+0000 mon.b (mon.1) 55 : audit [INF] from='client.? 192.168.123.100:0/2375185877' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch
2026-03-10T14:58:26.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:26 vm03 bash[23394]: audit 2026-03-10T14:58:25.441217+0000 mon.a (mon.0) 1347 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch
2026-03-10T14:58:27.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:27 vm00 bash[28403]: cluster 2026-03-10T14:58:26.298058+0000 mgr.y (mgr.24425) 253 : cluster [DBG] pgmap v434: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 522 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-10T14:58:27.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:27 vm00 bash[28403]: cluster 2026-03-10T14:58:26.428617+0000 mon.a (mon.0) 1348 : cluster [WRN] Health check failed: noup flag(s) set (OSDMAP_FLAGS)
2026-03-10T14:58:27.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:27 vm00 bash[28403]: cluster 2026-03-10T14:58:26.428672+0000 mon.a (mon.0) 1349 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 2 pgs inactive)
2026-03-10T14:58:27.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:27 vm00 bash[28403]: cluster 2026-03-10T14:58:26.428677+0000 mon.a (mon.0) 1350 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 195/600 objects degraded (32.500%), 39 pgs degraded)
2026-03-10T14:58:27.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:27 vm00 bash[28403]: audit 2026-03-10T14:58:26.440954+0000 mon.a (mon.0) 1351 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd set", "key": "noup"}]': finished
2026-03-10T14:58:27.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:27 vm00 bash[28403]: audit 2026-03-10T14:58:26.444706+0000 mon.b (mon.1) 56 : audit [INF] from='client.? 192.168.123.100:0/2375185877' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["1", "7", "2"]}]: dispatch
2026-03-10T14:58:27.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:27 vm00 bash[28403]: cluster 2026-03-10T14:58:26.452827+0000 mon.a (mon.0) 1352 : cluster [DBG] osdmap e320: 8 total, 8 up, 8 in
2026-03-10T14:58:27.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:27 vm00 bash[28403]: audit 2026-03-10T14:58:26.453717+0000 mon.a (mon.0) 1353 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["1", "7", "2"]}]: dispatch
2026-03-10T14:58:27.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:27 vm00 bash[20726]: cluster 2026-03-10T14:58:26.298058+0000 mgr.y (mgr.24425) 253 : cluster [DBG] pgmap v434: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 522 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-10T14:58:27.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:27 vm00 bash[20726]: cluster 2026-03-10T14:58:26.428617+0000 mon.a (mon.0) 1348 : cluster [WRN] Health check failed: noup flag(s) set (OSDMAP_FLAGS)
2026-03-10T14:58:27.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:27 vm00 bash[20726]: cluster 2026-03-10T14:58:26.428672+0000 mon.a (mon.0) 1349 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 2 pgs inactive)
2026-03-10T14:58:27.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:27 vm00 bash[20726]: cluster 2026-03-10T14:58:26.428677+0000 mon.a (mon.0) 1350 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 195/600 objects degraded (32.500%), 39 pgs degraded)
2026-03-10T14:58:27.715
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:27 vm00 bash[20726]: cluster 2026-03-10T14:58:26.428677+0000 mon.a (mon.0) 1350 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 195/600 objects degraded (32.500%), 39 pgs degraded) 2026-03-10T14:58:27.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:27 vm00 bash[20726]: audit 2026-03-10T14:58:26.440954+0000 mon.a (mon.0) 1351 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd set", "key": "noup"}]': finished 2026-03-10T14:58:27.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:27 vm00 bash[20726]: audit 2026-03-10T14:58:26.440954+0000 mon.a (mon.0) 1351 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd set", "key": "noup"}]': finished 2026-03-10T14:58:27.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:27 vm00 bash[20726]: audit 2026-03-10T14:58:26.444706+0000 mon.b (mon.1) 56 : audit [INF] from='client.? 192.168.123.100:0/2375185877' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["1", "7", "2"]}]: dispatch 2026-03-10T14:58:27.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:27 vm00 bash[20726]: audit 2026-03-10T14:58:26.444706+0000 mon.b (mon.1) 56 : audit [INF] from='client.? 192.168.123.100:0/2375185877' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["1", "7", "2"]}]: dispatch 2026-03-10T14:58:27.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:27 vm00 bash[20726]: cluster 2026-03-10T14:58:26.452827+0000 mon.a (mon.0) 1352 : cluster [DBG] osdmap e320: 8 total, 8 up, 8 in 2026-03-10T14:58:27.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:27 vm00 bash[20726]: cluster 2026-03-10T14:58:26.452827+0000 mon.a (mon.0) 1352 : cluster [DBG] osdmap e320: 8 total, 8 up, 8 in 2026-03-10T14:58:27.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:27 vm00 bash[20726]: audit 2026-03-10T14:58:26.453717+0000 mon.a (mon.0) 1353 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["1", "7", "2"]}]: dispatch 2026-03-10T14:58:27.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:27 vm00 bash[20726]: audit 2026-03-10T14:58:26.453717+0000 mon.a (mon.0) 1353 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["1", "7", "2"]}]: dispatch 2026-03-10T14:58:27.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:27 vm03 bash[23394]: cluster 2026-03-10T14:58:26.298058+0000 mgr.y (mgr.24425) 253 : cluster [DBG] pgmap v434: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 522 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T14:58:27.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:27 vm03 bash[23394]: cluster 2026-03-10T14:58:26.298058+0000 mgr.y (mgr.24425) 253 : cluster [DBG] pgmap v434: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 522 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T14:58:27.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:27 vm03 bash[23394]: cluster 2026-03-10T14:58:26.428617+0000 mon.a (mon.0) 1348 : cluster [WRN] Health check failed: noup flag(s) set (OSDMAP_FLAGS) 2026-03-10T14:58:27.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:27 vm03 bash[23394]: cluster 2026-03-10T14:58:26.428617+0000 mon.a (mon.0) 1348 : cluster [WRN] Health check failed: noup flag(s) set (OSDMAP_FLAGS) 2026-03-10T14:58:27.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:27 vm03 bash[23394]: cluster 2026-03-10T14:58:26.428672+0000 mon.a (mon.0) 1349 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 2 pgs inactive) 2026-03-10T14:58:27.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:27 vm03 bash[23394]: cluster 2026-03-10T14:58:26.428672+0000 mon.a (mon.0) 1349 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 2 pgs inactive) 2026-03-10T14:58:27.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 
14:58:27 vm03 bash[23394]: cluster 2026-03-10T14:58:26.428677+0000 mon.a (mon.0) 1350 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 195/600 objects degraded (32.500%), 39 pgs degraded) 2026-03-10T14:58:27.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:27 vm03 bash[23394]: cluster 2026-03-10T14:58:26.428677+0000 mon.a (mon.0) 1350 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 195/600 objects degraded (32.500%), 39 pgs degraded) 2026-03-10T14:58:27.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:27 vm03 bash[23394]: audit 2026-03-10T14:58:26.440954+0000 mon.a (mon.0) 1351 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd set", "key": "noup"}]': finished 2026-03-10T14:58:27.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:27 vm03 bash[23394]: audit 2026-03-10T14:58:26.440954+0000 mon.a (mon.0) 1351 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd set", "key": "noup"}]': finished 2026-03-10T14:58:27.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:27 vm03 bash[23394]: audit 2026-03-10T14:58:26.444706+0000 mon.b (mon.1) 56 : audit [INF] from='client.? 192.168.123.100:0/2375185877' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["1", "7", "2"]}]: dispatch 2026-03-10T14:58:27.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:27 vm03 bash[23394]: audit 2026-03-10T14:58:26.444706+0000 mon.b (mon.1) 56 : audit [INF] from='client.? 
192.168.123.100:0/2375185877' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["1", "7", "2"]}]: dispatch 2026-03-10T14:58:27.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:27 vm03 bash[23394]: cluster 2026-03-10T14:58:26.452827+0000 mon.a (mon.0) 1352 : cluster [DBG] osdmap e320: 8 total, 8 up, 8 in 2026-03-10T14:58:27.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:27 vm03 bash[23394]: cluster 2026-03-10T14:58:26.452827+0000 mon.a (mon.0) 1352 : cluster [DBG] osdmap e320: 8 total, 8 up, 8 in 2026-03-10T14:58:27.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:27 vm03 bash[23394]: audit 2026-03-10T14:58:26.453717+0000 mon.a (mon.0) 1353 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["1", "7", "2"]}]: dispatch 2026-03-10T14:58:27.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:27 vm03 bash[23394]: audit 2026-03-10T14:58:26.453717+0000 mon.a (mon.0) 1353 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["1", "7", "2"]}]: dispatch 2026-03-10T14:58:28.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:28 vm00 bash[28403]: cluster 2026-03-10T14:58:27.453388+0000 mon.a (mon.0) 1354 : cluster [WRN] Health check failed: 3 osds down (OSD_DOWN) 2026-03-10T14:58:28.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:28 vm00 bash[28403]: cluster 2026-03-10T14:58:27.453388+0000 mon.a (mon.0) 1354 : cluster [WRN] Health check failed: 3 osds down (OSD_DOWN) 2026-03-10T14:58:28.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:28 vm00 bash[28403]: audit 2026-03-10T14:58:27.465310+0000 mon.a (mon.0) 1355 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd down", "ids": ["1", "7", "2"]}]': finished 2026-03-10T14:58:28.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:28 vm00 bash[28403]: audit 2026-03-10T14:58:27.465310+0000 mon.a (mon.0) 1355 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd down", "ids": ["1", "7", "2"]}]': finished 2026-03-10T14:58:28.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:28 vm00 bash[28403]: cluster 2026-03-10T14:58:27.467821+0000 mon.a (mon.0) 1356 : cluster [DBG] osdmap e321: 8 total, 5 up, 8 in 2026-03-10T14:58:28.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:28 vm00 bash[28403]: cluster 2026-03-10T14:58:27.467821+0000 mon.a (mon.0) 1356 : cluster [DBG] osdmap e321: 8 total, 5 up, 8 in 2026-03-10T14:58:28.715 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 10 14:58:28 vm00 bash[37070]: debug 2026-03-10T14:58:28.598+0000 7fe4a4826640 -1 osd.1 322 osdmap NOUP flag is set, waiting for it to clear 2026-03-10T14:58:28.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:28 vm00 bash[20726]: cluster 2026-03-10T14:58:27.453388+0000 mon.a (mon.0) 1354 : cluster [WRN] Health check failed: 3 osds down (OSD_DOWN) 2026-03-10T14:58:28.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:28 vm00 bash[20726]: cluster 2026-03-10T14:58:27.453388+0000 mon.a (mon.0) 1354 : cluster [WRN] Health check failed: 3 osds down (OSD_DOWN) 2026-03-10T14:58:28.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:28 vm00 bash[20726]: audit 2026-03-10T14:58:27.465310+0000 mon.a (mon.0) 1355 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd down", "ids": ["1", "7", "2"]}]': finished 2026-03-10T14:58:28.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:28 vm00 bash[20726]: audit 2026-03-10T14:58:27.465310+0000 mon.a (mon.0) 1355 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd down", "ids": ["1", "7", "2"]}]': finished 2026-03-10T14:58:28.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:28 vm00 bash[20726]: cluster 2026-03-10T14:58:27.467821+0000 mon.a (mon.0) 1356 : cluster [DBG] osdmap e321: 8 total, 5 up, 8 in 2026-03-10T14:58:28.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:28 vm00 bash[20726]: cluster 2026-03-10T14:58:27.467821+0000 mon.a (mon.0) 1356 : cluster [DBG] osdmap e321: 8 total, 5 up, 8 in 2026-03-10T14:58:28.777 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:28 vm03 bash[23394]: cluster 2026-03-10T14:58:27.453388+0000 mon.a (mon.0) 1354 : cluster [WRN] Health check failed: 3 osds down (OSD_DOWN) 2026-03-10T14:58:28.778 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:28 vm03 bash[23394]: cluster 2026-03-10T14:58:27.453388+0000 mon.a (mon.0) 1354 : cluster [WRN] Health check failed: 3 osds down (OSD_DOWN) 2026-03-10T14:58:28.778 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:28 vm03 bash[23394]: audit 2026-03-10T14:58:27.465310+0000 mon.a (mon.0) 1355 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd down", "ids": ["1", "7", "2"]}]': finished 2026-03-10T14:58:28.778 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:28 vm03 bash[23394]: audit 2026-03-10T14:58:27.465310+0000 mon.a (mon.0) 1355 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd down", "ids": ["1", "7", "2"]}]': finished 2026-03-10T14:58:28.778 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:28 vm03 bash[23394]: cluster 2026-03-10T14:58:27.467821+0000 mon.a (mon.0) 1356 : cluster [DBG] osdmap e321: 8 total, 5 up, 8 in 2026-03-10T14:58:28.778 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:28 vm03 bash[23394]: cluster 2026-03-10T14:58:27.467821+0000 mon.a (mon.0) 1356 : cluster [DBG] osdmap e321: 8 total, 5 up, 8 in 2026-03-10T14:58:29.125 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 14:58:28 vm03 bash[44271]: debug 2026-03-10T14:58:28.789+0000 7fd610323640 -1 osd.7 322 osdmap NOUP flag is set, waiting for it to clear 2026-03-10T14:58:29.125 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 14:58:28 vm03 bash[48459]: debug there is no tcmu-runner data available 2026-03-10T14:58:29.214 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 10 14:58:28 vm00 bash[43300]: debug 2026-03-10T14:58:28.854+0000 7ff5f8528640 -1 osd.2 322 osdmap NOUP flag is set, waiting for it to clear 2026-03-10T14:58:29.875 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 14:58:29 vm03 bash[44271]: debug 2026-03-10T14:58:29.477+0000 7fd602ef9640 -1 osd.7 323 osdmap NOUP flag is set, waiting for it to clear 2026-03-10T14:58:29.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:29 vm03 bash[23394]: cluster 2026-03-10T14:58:28.298400+0000 mgr.y (mgr.24425) 254 : cluster [DBG] pgmap v437: 196 pgs: 60 stale+active+clean, 32 unknown, 104 active+clean; 455 KiB data, 522 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T14:58:29.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:29 vm03 bash[23394]: cluster 2026-03-10T14:58:28.298400+0000 mgr.y (mgr.24425) 254 : cluster [DBG] pgmap v437: 196 pgs: 60 stale+active+clean, 32 unknown, 104 active+clean; 455 KiB data, 522 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T14:58:29.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:29 vm03 
bash[23394]: cluster 2026-03-10T14:58:28.482168+0000 mon.a (mon.0) 1357 : cluster [DBG] osdmap e322: 8 total, 5 up, 8 in 2026-03-10T14:58:29.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:29 vm03 bash[23394]: cluster 2026-03-10T14:58:28.482168+0000 mon.a (mon.0) 1357 : cluster [DBG] osdmap e322: 8 total, 5 up, 8 in 2026-03-10T14:58:29.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:29 vm03 bash[23394]: cluster 2026-03-10T14:58:28.483726+0000 osd.2 (osd.2) 5 : cluster [WRN] Monitor daemon marked osd.2 down, but it is still running 2026-03-10T14:58:29.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:29 vm03 bash[23394]: cluster 2026-03-10T14:58:28.483726+0000 osd.2 (osd.2) 5 : cluster [WRN] Monitor daemon marked osd.2 down, but it is still running 2026-03-10T14:58:29.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:29 vm03 bash[23394]: cluster 2026-03-10T14:58:28.483728+0000 osd.2 (osd.2) 6 : cluster [DBG] map e322 wrongly marked me down at e321 2026-03-10T14:58:29.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:29 vm03 bash[23394]: cluster 2026-03-10T14:58:28.483728+0000 osd.2 (osd.2) 6 : cluster [DBG] map e322 wrongly marked me down at e321 2026-03-10T14:58:29.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:29 vm03 bash[23394]: cluster 2026-03-10T14:58:28.486342+0000 osd.1 (osd.1) 3 : cluster [WRN] Monitor daemon marked osd.1 down, but it is still running 2026-03-10T14:58:29.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:29 vm03 bash[23394]: cluster 2026-03-10T14:58:28.486342+0000 osd.1 (osd.1) 3 : cluster [WRN] Monitor daemon marked osd.1 down, but it is still running 2026-03-10T14:58:29.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:29 vm03 bash[23394]: cluster 2026-03-10T14:58:28.486345+0000 osd.1 (osd.1) 4 : cluster [DBG] map e322 wrongly marked me down at e321 2026-03-10T14:58:29.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:29 vm03 bash[23394]: cluster 2026-03-10T14:58:28.486345+0000 
osd.1 (osd.1) 4 : cluster [DBG] map e322 wrongly marked me down at e321 2026-03-10T14:58:29.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:29 vm03 bash[23394]: cluster 2026-03-10T14:58:28.486741+0000 osd.7 (osd.7) 7 : cluster [WRN] Monitor daemon marked osd.7 down, but it is still running 2026-03-10T14:58:29.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:29 vm03 bash[23394]: cluster 2026-03-10T14:58:28.486741+0000 osd.7 (osd.7) 7 : cluster [WRN] Monitor daemon marked osd.7 down, but it is still running 2026-03-10T14:58:29.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:29 vm03 bash[23394]: cluster 2026-03-10T14:58:28.486743+0000 osd.7 (osd.7) 8 : cluster [DBG] map e322 wrongly marked me down at e321 2026-03-10T14:58:29.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:29 vm03 bash[23394]: cluster 2026-03-10T14:58:28.486743+0000 osd.7 (osd.7) 8 : cluster [DBG] map e322 wrongly marked me down at e321 2026-03-10T14:58:29.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:29 vm03 bash[23394]: cluster 2026-03-10T14:58:28.486765+0000 mon.a (mon.0) 1358 : cluster [INF] osd.2 marked itself dead as of e322 2026-03-10T14:58:29.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:29 vm03 bash[23394]: cluster 2026-03-10T14:58:28.486765+0000 mon.a (mon.0) 1358 : cluster [INF] osd.2 marked itself dead as of e322 2026-03-10T14:58:29.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:29 vm03 bash[23394]: cluster 2026-03-10T14:58:28.486823+0000 mon.a (mon.0) 1359 : cluster [INF] osd.1 marked itself dead as of e322 2026-03-10T14:58:29.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:29 vm03 bash[23394]: cluster 2026-03-10T14:58:28.486823+0000 mon.a (mon.0) 1359 : cluster [INF] osd.1 marked itself dead as of e322 2026-03-10T14:58:29.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:29 vm03 bash[23394]: cluster 2026-03-10T14:58:28.501934+0000 mon.a (mon.0) 1360 : cluster [INF] osd.7 marked itself dead as of e322 
2026-03-10T14:58:29.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:29 vm03 bash[23394]: cluster 2026-03-10T14:58:28.501934+0000 mon.a (mon.0) 1360 : cluster [INF] osd.7 marked itself dead as of e322 2026-03-10T14:58:29.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:29 vm03 bash[23394]: audit 2026-03-10T14:58:28.780614+0000 mgr.y (mgr.24425) 255 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:58:29.876 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:29 vm03 bash[23394]: audit 2026-03-10T14:58:28.780614+0000 mgr.y (mgr.24425) 255 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:58:29.964 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 10 14:58:29 vm00 bash[37070]: debug 2026-03-10T14:58:29.478+0000 7fe498410640 -1 osd.1 323 osdmap NOUP flag is set, waiting for it to clear 2026-03-10T14:58:29.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:29 vm00 bash[28403]: cluster 2026-03-10T14:58:28.298400+0000 mgr.y (mgr.24425) 254 : cluster [DBG] pgmap v437: 196 pgs: 60 stale+active+clean, 32 unknown, 104 active+clean; 455 KiB data, 522 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T14:58:29.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:29 vm00 bash[28403]: cluster 2026-03-10T14:58:28.298400+0000 mgr.y (mgr.24425) 254 : cluster [DBG] pgmap v437: 196 pgs: 60 stale+active+clean, 32 unknown, 104 active+clean; 455 KiB data, 522 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T14:58:29.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:29 vm00 bash[28403]: cluster 2026-03-10T14:58:28.482168+0000 mon.a (mon.0) 1357 : cluster [DBG] osdmap e322: 8 total, 5 up, 8 in 2026-03-10T14:58:29.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:29 vm00 bash[28403]: cluster 2026-03-10T14:58:28.482168+0000 mon.a (mon.0) 1357 : 
cluster [DBG] osdmap e322: 8 total, 5 up, 8 in 2026-03-10T14:58:29.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:29 vm00 bash[28403]: cluster 2026-03-10T14:58:28.483726+0000 osd.2 (osd.2) 5 : cluster [WRN] Monitor daemon marked osd.2 down, but it is still running 2026-03-10T14:58:29.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:29 vm00 bash[28403]: cluster 2026-03-10T14:58:28.483726+0000 osd.2 (osd.2) 5 : cluster [WRN] Monitor daemon marked osd.2 down, but it is still running 2026-03-10T14:58:29.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:29 vm00 bash[28403]: cluster 2026-03-10T14:58:28.483728+0000 osd.2 (osd.2) 6 : cluster [DBG] map e322 wrongly marked me down at e321 2026-03-10T14:58:29.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:29 vm00 bash[28403]: cluster 2026-03-10T14:58:28.483728+0000 osd.2 (osd.2) 6 : cluster [DBG] map e322 wrongly marked me down at e321 2026-03-10T14:58:29.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:29 vm00 bash[28403]: cluster 2026-03-10T14:58:28.486342+0000 osd.1 (osd.1) 3 : cluster [WRN] Monitor daemon marked osd.1 down, but it is still running 2026-03-10T14:58:29.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:29 vm00 bash[28403]: cluster 2026-03-10T14:58:28.486342+0000 osd.1 (osd.1) 3 : cluster [WRN] Monitor daemon marked osd.1 down, but it is still running 2026-03-10T14:58:29.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:29 vm00 bash[28403]: cluster 2026-03-10T14:58:28.486345+0000 osd.1 (osd.1) 4 : cluster [DBG] map e322 wrongly marked me down at e321 2026-03-10T14:58:29.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:29 vm00 bash[28403]: cluster 2026-03-10T14:58:28.486345+0000 osd.1 (osd.1) 4 : cluster [DBG] map e322 wrongly marked me down at e321 2026-03-10T14:58:29.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:29 vm00 bash[28403]: cluster 2026-03-10T14:58:28.486741+0000 osd.7 (osd.7) 7 : cluster [WRN] Monitor daemon marked osd.7 down, but 
it is still running 2026-03-10T14:58:29.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:29 vm00 bash[28403]: cluster 2026-03-10T14:58:28.486741+0000 osd.7 (osd.7) 7 : cluster [WRN] Monitor daemon marked osd.7 down, but it is still running 2026-03-10T14:58:29.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:29 vm00 bash[28403]: cluster 2026-03-10T14:58:28.486743+0000 osd.7 (osd.7) 8 : cluster [DBG] map e322 wrongly marked me down at e321 2026-03-10T14:58:29.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:29 vm00 bash[28403]: cluster 2026-03-10T14:58:28.486743+0000 osd.7 (osd.7) 8 : cluster [DBG] map e322 wrongly marked me down at e321 2026-03-10T14:58:29.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:29 vm00 bash[28403]: cluster 2026-03-10T14:58:28.486765+0000 mon.a (mon.0) 1358 : cluster [INF] osd.2 marked itself dead as of e322 2026-03-10T14:58:29.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:29 vm00 bash[28403]: cluster 2026-03-10T14:58:28.486765+0000 mon.a (mon.0) 1358 : cluster [INF] osd.2 marked itself dead as of e322 2026-03-10T14:58:29.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:29 vm00 bash[28403]: cluster 2026-03-10T14:58:28.486823+0000 mon.a (mon.0) 1359 : cluster [INF] osd.1 marked itself dead as of e322 2026-03-10T14:58:29.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:29 vm00 bash[28403]: cluster 2026-03-10T14:58:28.486823+0000 mon.a (mon.0) 1359 : cluster [INF] osd.1 marked itself dead as of e322 2026-03-10T14:58:29.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:29 vm00 bash[28403]: cluster 2026-03-10T14:58:28.501934+0000 mon.a (mon.0) 1360 : cluster [INF] osd.7 marked itself dead as of e322 2026-03-10T14:58:29.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:29 vm00 bash[28403]: cluster 2026-03-10T14:58:28.501934+0000 mon.a (mon.0) 1360 : cluster [INF] osd.7 marked itself dead as of e322 2026-03-10T14:58:29.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:29 vm00 
bash[28403]: audit 2026-03-10T14:58:28.780614+0000 mgr.y (mgr.24425) 255 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:58:29.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:29 vm00 bash[28403]: audit 2026-03-10T14:58:28.780614+0000 mgr.y (mgr.24425) 255 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:58:29.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:29 vm00 bash[20726]: cluster 2026-03-10T14:58:28.298400+0000 mgr.y (mgr.24425) 254 : cluster [DBG] pgmap v437: 196 pgs: 60 stale+active+clean, 32 unknown, 104 active+clean; 455 KiB data, 522 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T14:58:29.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:29 vm00 bash[20726]: cluster 2026-03-10T14:58:28.298400+0000 mgr.y (mgr.24425) 254 : cluster [DBG] pgmap v437: 196 pgs: 60 stale+active+clean, 32 unknown, 104 active+clean; 455 KiB data, 522 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T14:58:29.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:29 vm00 bash[20726]: cluster 2026-03-10T14:58:28.482168+0000 mon.a (mon.0) 1357 : cluster [DBG] osdmap e322: 8 total, 5 up, 8 in 2026-03-10T14:58:29.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:29 vm00 bash[20726]: cluster 2026-03-10T14:58:28.482168+0000 mon.a (mon.0) 1357 : cluster [DBG] osdmap e322: 8 total, 5 up, 8 in 2026-03-10T14:58:29.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:29 vm00 bash[20726]: cluster 2026-03-10T14:58:28.483726+0000 osd.2 (osd.2) 5 : cluster [WRN] Monitor daemon marked osd.2 down, but it is still running 2026-03-10T14:58:29.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:29 vm00 bash[20726]: cluster 2026-03-10T14:58:28.483726+0000 osd.2 (osd.2) 5 : cluster [WRN] Monitor daemon marked osd.2 down, but it is still running 
2026-03-10T14:58:29.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:29 vm00 bash[20726]: cluster 2026-03-10T14:58:28.483728+0000 osd.2 (osd.2) 6 : cluster [DBG] map e322 wrongly marked me down at e321 2026-03-10T14:58:29.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:29 vm00 bash[20726]: cluster 2026-03-10T14:58:28.483728+0000 osd.2 (osd.2) 6 : cluster [DBG] map e322 wrongly marked me down at e321 2026-03-10T14:58:29.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:29 vm00 bash[20726]: cluster 2026-03-10T14:58:28.486342+0000 osd.1 (osd.1) 3 : cluster [WRN] Monitor daemon marked osd.1 down, but it is still running 2026-03-10T14:58:29.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:29 vm00 bash[20726]: cluster 2026-03-10T14:58:28.486342+0000 osd.1 (osd.1) 3 : cluster [WRN] Monitor daemon marked osd.1 down, but it is still running 2026-03-10T14:58:29.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:29 vm00 bash[20726]: cluster 2026-03-10T14:58:28.486345+0000 osd.1 (osd.1) 4 : cluster [DBG] map e322 wrongly marked me down at e321 2026-03-10T14:58:29.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:29 vm00 bash[20726]: cluster 2026-03-10T14:58:28.486345+0000 osd.1 (osd.1) 4 : cluster [DBG] map e322 wrongly marked me down at e321 2026-03-10T14:58:29.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:29 vm00 bash[20726]: cluster 2026-03-10T14:58:28.486741+0000 osd.7 (osd.7) 7 : cluster [WRN] Monitor daemon marked osd.7 down, but it is still running 2026-03-10T14:58:29.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:29 vm00 bash[20726]: cluster 2026-03-10T14:58:28.486741+0000 osd.7 (osd.7) 7 : cluster [WRN] Monitor daemon marked osd.7 down, but it is still running 2026-03-10T14:58:29.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:29 vm00 bash[20726]: cluster 2026-03-10T14:58:28.486743+0000 osd.7 (osd.7) 8 : cluster [DBG] map e322 wrongly marked me down at e321 2026-03-10T14:58:29.965 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:29 vm00 bash[20726]: cluster 2026-03-10T14:58:28.486743+0000 osd.7 (osd.7) 8 : cluster [DBG] map e322 wrongly marked me down at e321 2026-03-10T14:58:29.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:29 vm00 bash[20726]: cluster 2026-03-10T14:58:28.486765+0000 mon.a (mon.0) 1358 : cluster [INF] osd.2 marked itself dead as of e322 2026-03-10T14:58:29.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:29 vm00 bash[20726]: cluster 2026-03-10T14:58:28.486765+0000 mon.a (mon.0) 1358 : cluster [INF] osd.2 marked itself dead as of e322 2026-03-10T14:58:29.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:29 vm00 bash[20726]: cluster 2026-03-10T14:58:28.486823+0000 mon.a (mon.0) 1359 : cluster [INF] osd.1 marked itself dead as of e322 2026-03-10T14:58:29.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:29 vm00 bash[20726]: cluster 2026-03-10T14:58:28.486823+0000 mon.a (mon.0) 1359 : cluster [INF] osd.1 marked itself dead as of e322 2026-03-10T14:58:29.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:29 vm00 bash[20726]: cluster 2026-03-10T14:58:28.501934+0000 mon.a (mon.0) 1360 : cluster [INF] osd.7 marked itself dead as of e322 2026-03-10T14:58:29.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:29 vm00 bash[20726]: cluster 2026-03-10T14:58:28.501934+0000 mon.a (mon.0) 1360 : cluster [INF] osd.7 marked itself dead as of e322 2026-03-10T14:58:29.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:29 vm00 bash[20726]: audit 2026-03-10T14:58:28.780614+0000 mgr.y (mgr.24425) 255 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:58:29.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:29 vm00 bash[20726]: audit 2026-03-10T14:58:28.780614+0000 mgr.y (mgr.24425) 255 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: 
dispatch 2026-03-10T14:58:29.965 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 10 14:58:29 vm00 bash[43300]: debug 2026-03-10T14:58:29.486+0000 7ff5eb911640 -1 osd.2 323 osdmap NOUP flag is set, waiting for it to clear 2026-03-10T14:58:30.875 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 14:58:30 vm03 bash[44271]: debug 2026-03-10T14:58:30.489+0000 7fd60b139640 -1 osd.7 324 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T14:58:30.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:30 vm03 bash[23394]: cluster 2026-03-10T14:58:29.501183+0000 mon.a (mon.0) 1361 : cluster [DBG] osdmap e323: 8 total, 5 up, 8 in 2026-03-10T14:58:30.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:30 vm03 bash[23394]: cluster 2026-03-10T14:58:29.501183+0000 mon.a (mon.0) 1361 : cluster [DBG] osdmap e323: 8 total, 5 up, 8 in 2026-03-10T14:58:30.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:30 vm03 bash[23394]: audit 2026-03-10T14:58:30.467321+0000 mon.b (mon.1) 57 : audit [INF] from='client.? 192.168.123.100:0/2375185877' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:30.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:30 vm03 bash[23394]: audit 2026-03-10T14:58:30.467321+0000 mon.b (mon.1) 57 : audit [INF] from='client.? 192.168.123.100:0/2375185877' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:30.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:30 vm03 bash[23394]: audit 2026-03-10T14:58:30.471588+0000 mon.a (mon.0) 1362 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:30.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:30 vm03 bash[23394]: audit 2026-03-10T14:58:30.471588+0000 mon.a (mon.0) 1362 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:30.964 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 10 14:58:30 vm00 bash[37070]: debug 2026-03-10T14:58:30.502+0000 7fe4a0650640 -1 osd.1 324 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T14:58:30.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:30 vm00 bash[28403]: cluster 2026-03-10T14:58:29.501183+0000 mon.a (mon.0) 1361 : cluster [DBG] osdmap e323: 8 total, 5 up, 8 in 2026-03-10T14:58:30.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:30 vm00 bash[28403]: cluster 2026-03-10T14:58:29.501183+0000 mon.a (mon.0) 1361 : cluster [DBG] osdmap e323: 8 total, 5 up, 8 in 2026-03-10T14:58:30.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:30 vm00 bash[28403]: audit 2026-03-10T14:58:30.467321+0000 mon.b (mon.1) 57 : audit [INF] from='client.? 192.168.123.100:0/2375185877' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:30.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:30 vm00 bash[28403]: audit 2026-03-10T14:58:30.467321+0000 mon.b (mon.1) 57 : audit [INF] from='client.? 192.168.123.100:0/2375185877' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:30.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:30 vm00 bash[28403]: audit 2026-03-10T14:58:30.471588+0000 mon.a (mon.0) 1362 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:30.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:30 vm00 bash[28403]: audit 2026-03-10T14:58:30.471588+0000 mon.a (mon.0) 1362 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:30.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:30 vm00 bash[20726]: cluster 2026-03-10T14:58:29.501183+0000 mon.a (mon.0) 1361 : cluster [DBG] osdmap e323: 8 total, 5 up, 8 in 2026-03-10T14:58:30.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:30 vm00 bash[20726]: cluster 2026-03-10T14:58:29.501183+0000 mon.a (mon.0) 1361 : cluster [DBG] osdmap e323: 8 total, 5 up, 8 in 2026-03-10T14:58:30.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:30 vm00 bash[20726]: audit 2026-03-10T14:58:30.467321+0000 mon.b (mon.1) 57 : audit [INF] from='client.? 192.168.123.100:0/2375185877' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:30.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:30 vm00 bash[20726]: audit 2026-03-10T14:58:30.467321+0000 mon.b (mon.1) 57 : audit [INF] from='client.? 192.168.123.100:0/2375185877' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:30.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:30 vm00 bash[20726]: audit 2026-03-10T14:58:30.471588+0000 mon.a (mon.0) 1362 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:30.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:30 vm00 bash[20726]: audit 2026-03-10T14:58:30.471588+0000 mon.a (mon.0) 1362 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:30.964 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 10 14:58:30 vm00 bash[43300]: debug 2026-03-10T14:58:30.486+0000 7ff5f3b51640 -1 osd.2 324 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T14:58:31.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:31 vm03 bash[23394]: cluster 2026-03-10T14:58:30.299253+0000 mgr.y (mgr.24425) 256 : cluster [DBG] pgmap v440: 196 pgs: 4 undersized+degraded+peered+wait, 6 active+undersized+degraded+wait, 62 active+undersized, 33 undersized+peered, 1 stale+active+clean, 3 unknown, 7 undersized+degraded+peered, 7 undersized+peered+wait, 19 active+undersized+wait, 22 active+undersized+degraded, 32 active+clean; 455 KiB data, 531 MiB used, 159 GiB / 160 GiB avail; 207/597 objects degraded (34.673%) 2026-03-10T14:58:31.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:31 vm03 bash[23394]: cluster 2026-03-10T14:58:30.299253+0000 mgr.y (mgr.24425) 256 : cluster [DBG] pgmap v440: 196 pgs: 4 undersized+degraded+peered+wait, 6 active+undersized+degraded+wait, 62 active+undersized, 33 undersized+peered, 1 stale+active+clean, 3 unknown, 7 undersized+degraded+peered, 7 undersized+peered+wait, 19 active+undersized+wait, 22 active+undersized+degraded, 32 active+clean; 455 KiB data, 531 MiB used, 159 GiB / 160 GiB avail; 207/597 objects degraded (34.673%) 2026-03-10T14:58:31.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:31 vm03 bash[23394]: cluster 2026-03-10T14:58:30.482547+0000 mon.a (mon.0) 1363 : cluster [WRN] Health check failed: Degraded data redundancy: 207/597 objects degraded (34.673%), 39 pgs degraded (PG_DEGRADED) 2026-03-10T14:58:31.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:31 vm03 bash[23394]: cluster 2026-03-10T14:58:30.482547+0000 mon.a (mon.0) 1363 : cluster [WRN] Health check failed: Degraded data redundancy: 207/597 objects degraded (34.673%), 
39 pgs degraded (PG_DEGRADED) 2026-03-10T14:58:31.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:31 vm03 bash[23394]: cluster 2026-03-10T14:58:30.483478+0000 mon.a (mon.0) 1364 : cluster [INF] Health check cleared: OSDMAP_FLAGS (was: noup flag(s) set) 2026-03-10T14:58:31.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:31 vm03 bash[23394]: cluster 2026-03-10T14:58:30.483478+0000 mon.a (mon.0) 1364 : cluster [INF] Health check cleared: OSDMAP_FLAGS (was: noup flag(s) set) 2026-03-10T14:58:31.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:31 vm03 bash[23394]: audit 2026-03-10T14:58:30.489064+0000 mon.a (mon.0) 1365 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:58:31.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:31 vm03 bash[23394]: audit 2026-03-10T14:58:30.489064+0000 mon.a (mon.0) 1365 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:58:31.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:31 vm03 bash[23394]: cluster 2026-03-10T14:58:30.508949+0000 mon.a (mon.0) 1366 : cluster [DBG] osdmap e324: 8 total, 5 up, 8 in 2026-03-10T14:58:31.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:31 vm03 bash[23394]: cluster 2026-03-10T14:58:30.508949+0000 mon.a (mon.0) 1366 : cluster [DBG] osdmap e324: 8 total, 5 up, 8 in 2026-03-10T14:58:31.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:31 vm00 bash[28403]: cluster 2026-03-10T14:58:30.299253+0000 mgr.y (mgr.24425) 256 : cluster [DBG] pgmap v440: 196 pgs: 4 undersized+degraded+peered+wait, 6 active+undersized+degraded+wait, 62 active+undersized, 33 undersized+peered, 1 stale+active+clean, 3 unknown, 7 undersized+degraded+peered, 7 undersized+peered+wait, 19 active+undersized+wait, 22 active+undersized+degraded, 32 active+clean; 455 KiB data, 531 MiB used, 159 GiB / 160 GiB avail; 207/597 objects degraded (34.673%) 
2026-03-10T14:58:31.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:31 vm00 bash[28403]: cluster 2026-03-10T14:58:30.299253+0000 mgr.y (mgr.24425) 256 : cluster [DBG] pgmap v440: 196 pgs: 4 undersized+degraded+peered+wait, 6 active+undersized+degraded+wait, 62 active+undersized, 33 undersized+peered, 1 stale+active+clean, 3 unknown, 7 undersized+degraded+peered, 7 undersized+peered+wait, 19 active+undersized+wait, 22 active+undersized+degraded, 32 active+clean; 455 KiB data, 531 MiB used, 159 GiB / 160 GiB avail; 207/597 objects degraded (34.673%) 2026-03-10T14:58:31.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:31 vm00 bash[28403]: cluster 2026-03-10T14:58:30.482547+0000 mon.a (mon.0) 1363 : cluster [WRN] Health check failed: Degraded data redundancy: 207/597 objects degraded (34.673%), 39 pgs degraded (PG_DEGRADED) 2026-03-10T14:58:31.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:31 vm00 bash[28403]: cluster 2026-03-10T14:58:30.482547+0000 mon.a (mon.0) 1363 : cluster [WRN] Health check failed: Degraded data redundancy: 207/597 objects degraded (34.673%), 39 pgs degraded (PG_DEGRADED) 2026-03-10T14:58:31.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:31 vm00 bash[28403]: cluster 2026-03-10T14:58:30.483478+0000 mon.a (mon.0) 1364 : cluster [INF] Health check cleared: OSDMAP_FLAGS (was: noup flag(s) set) 2026-03-10T14:58:31.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:31 vm00 bash[28403]: cluster 2026-03-10T14:58:30.483478+0000 mon.a (mon.0) 1364 : cluster [INF] Health check cleared: OSDMAP_FLAGS (was: noup flag(s) set) 2026-03-10T14:58:31.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:31 vm00 bash[28403]: audit 2026-03-10T14:58:30.489064+0000 mon.a (mon.0) 1365 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:58:31.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:31 vm00 bash[28403]: audit 2026-03-10T14:58:30.489064+0000 mon.a (mon.0) 1365 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:58:31.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:31 vm00 bash[28403]: cluster 2026-03-10T14:58:30.508949+0000 mon.a (mon.0) 1366 : cluster [DBG] osdmap e324: 8 total, 5 up, 8 in 2026-03-10T14:58:31.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:31 vm00 bash[28403]: cluster 2026-03-10T14:58:30.508949+0000 mon.a (mon.0) 1366 : cluster [DBG] osdmap e324: 8 total, 5 up, 8 in 2026-03-10T14:58:31.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:31 vm00 bash[20726]: cluster 2026-03-10T14:58:30.299253+0000 mgr.y (mgr.24425) 256 : cluster [DBG] pgmap v440: 196 pgs: 4 undersized+degraded+peered+wait, 6 active+undersized+degraded+wait, 62 active+undersized, 33 undersized+peered, 1 stale+active+clean, 3 unknown, 7 undersized+degraded+peered, 7 undersized+peered+wait, 19 active+undersized+wait, 22 active+undersized+degraded, 32 active+clean; 455 KiB data, 531 MiB used, 159 GiB / 160 GiB avail; 207/597 objects degraded (34.673%) 2026-03-10T14:58:31.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:31 vm00 bash[20726]: cluster 2026-03-10T14:58:30.299253+0000 mgr.y (mgr.24425) 256 : cluster [DBG] pgmap v440: 196 pgs: 4 undersized+degraded+peered+wait, 6 active+undersized+degraded+wait, 62 active+undersized, 33 undersized+peered, 1 stale+active+clean, 3 unknown, 7 undersized+degraded+peered, 7 undersized+peered+wait, 19 active+undersized+wait, 22 active+undersized+degraded, 32 active+clean; 455 KiB data, 531 MiB used, 159 GiB / 160 GiB avail; 207/597 objects degraded (34.673%) 2026-03-10T14:58:31.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:31 vm00 bash[20726]: cluster 
2026-03-10T14:58:30.482547+0000 mon.a (mon.0) 1363 : cluster [WRN] Health check failed: Degraded data redundancy: 207/597 objects degraded (34.673%), 39 pgs degraded (PG_DEGRADED) 2026-03-10T14:58:31.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:31 vm00 bash[20726]: cluster 2026-03-10T14:58:30.482547+0000 mon.a (mon.0) 1363 : cluster [WRN] Health check failed: Degraded data redundancy: 207/597 objects degraded (34.673%), 39 pgs degraded (PG_DEGRADED) 2026-03-10T14:58:31.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:31 vm00 bash[20726]: cluster 2026-03-10T14:58:30.483478+0000 mon.a (mon.0) 1364 : cluster [INF] Health check cleared: OSDMAP_FLAGS (was: noup flag(s) set) 2026-03-10T14:58:31.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:31 vm00 bash[20726]: cluster 2026-03-10T14:58:30.483478+0000 mon.a (mon.0) 1364 : cluster [INF] Health check cleared: OSDMAP_FLAGS (was: noup flag(s) set) 2026-03-10T14:58:31.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:31 vm00 bash[20726]: audit 2026-03-10T14:58:30.489064+0000 mon.a (mon.0) 1365 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:58:31.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:31 vm00 bash[20726]: audit 2026-03-10T14:58:30.489064+0000 mon.a (mon.0) 1365 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:58:31.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:31 vm00 bash[20726]: cluster 2026-03-10T14:58:30.508949+0000 mon.a (mon.0) 1366 : cluster [DBG] osdmap e324: 8 total, 5 up, 8 in 2026-03-10T14:58:31.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:31 vm00 bash[20726]: cluster 2026-03-10T14:58:30.508949+0000 mon.a (mon.0) 1366 : cluster [DBG] osdmap e324: 8 total, 5 up, 8 in 2026-03-10T14:58:32.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:32 vm03 bash[23394]: cluster 2026-03-10T14:58:31.489539+0000 mon.a (mon.0) 1367 : cluster [INF] Health check cleared: OSD_DOWN (was: 3 osds down) 2026-03-10T14:58:32.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:32 vm03 bash[23394]: cluster 2026-03-10T14:58:31.489539+0000 mon.a (mon.0) 1367 : cluster [INF] Health check cleared: OSD_DOWN (was: 3 osds down) 2026-03-10T14:58:32.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:32 vm03 bash[23394]: cluster 2026-03-10T14:58:31.516350+0000 mon.a (mon.0) 1368 : cluster [INF] osd.7 v2:192.168.123.103:6812/1578983727 boot 2026-03-10T14:58:32.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:32 vm03 bash[23394]: cluster 2026-03-10T14:58:31.516350+0000 mon.a (mon.0) 1368 : cluster [INF] osd.7 v2:192.168.123.103:6812/1578983727 boot 2026-03-10T14:58:32.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:32 vm03 bash[23394]: cluster 2026-03-10T14:58:31.516488+0000 mon.a (mon.0) 1369 : cluster [INF] osd.1 v2:192.168.123.100:6805/198852601 boot 2026-03-10T14:58:32.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:32 vm03 bash[23394]: cluster 2026-03-10T14:58:31.516488+0000 mon.a (mon.0) 1369 : cluster [INF] osd.1 v2:192.168.123.100:6805/198852601 boot 2026-03-10T14:58:32.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:32 vm03 bash[23394]: cluster 2026-03-10T14:58:31.516571+0000 mon.a (mon.0) 1370 : cluster [INF] osd.2 
v2:192.168.123.100:6809/4087124508 boot 2026-03-10T14:58:32.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:32 vm03 bash[23394]: cluster 2026-03-10T14:58:31.516571+0000 mon.a (mon.0) 1370 : cluster [INF] osd.2 v2:192.168.123.100:6809/4087124508 boot 2026-03-10T14:58:32.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:32 vm03 bash[23394]: cluster 2026-03-10T14:58:31.516696+0000 mon.a (mon.0) 1371 : cluster [DBG] osdmap e325: 8 total, 8 up, 8 in 2026-03-10T14:58:32.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:32 vm03 bash[23394]: cluster 2026-03-10T14:58:31.516696+0000 mon.a (mon.0) 1371 : cluster [DBG] osdmap e325: 8 total, 8 up, 8 in 2026-03-10T14:58:32.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:32 vm03 bash[23394]: audit 2026-03-10T14:58:31.522150+0000 mon.a (mon.0) 1372 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T14:58:32.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:32 vm03 bash[23394]: audit 2026-03-10T14:58:31.522150+0000 mon.a (mon.0) 1372 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T14:58:32.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:32 vm03 bash[23394]: audit 2026-03-10T14:58:31.522826+0000 mon.a (mon.0) 1373 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T14:58:32.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:32 vm03 bash[23394]: audit 2026-03-10T14:58:31.522826+0000 mon.a (mon.0) 1373 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T14:58:32.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:32 vm03 bash[23394]: audit 2026-03-10T14:58:31.523354+0000 mon.a (mon.0) 1374 : audit [DBG] from='mgr.24425 
192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T14:58:32.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:32 vm03 bash[23394]: audit 2026-03-10T14:58:31.523354+0000 mon.a (mon.0) 1374 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T14:58:32.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:32 vm03 bash[23394]: cluster 2026-03-10T14:58:32.518056+0000 mon.a (mon.0) 1375 : cluster [DBG] osdmap e326: 8 total, 8 up, 8 in 2026-03-10T14:58:32.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:32 vm03 bash[23394]: cluster 2026-03-10T14:58:32.518056+0000 mon.a (mon.0) 1375 : cluster [DBG] osdmap e326: 8 total, 8 up, 8 in 2026-03-10T14:58:32.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:32 vm00 bash[28403]: cluster 2026-03-10T14:58:31.489539+0000 mon.a (mon.0) 1367 : cluster [INF] Health check cleared: OSD_DOWN (was: 3 osds down) 2026-03-10T14:58:32.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:32 vm00 bash[28403]: cluster 2026-03-10T14:58:31.489539+0000 mon.a (mon.0) 1367 : cluster [INF] Health check cleared: OSD_DOWN (was: 3 osds down) 2026-03-10T14:58:32.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:32 vm00 bash[28403]: cluster 2026-03-10T14:58:31.516350+0000 mon.a (mon.0) 1368 : cluster [INF] osd.7 v2:192.168.123.103:6812/1578983727 boot 2026-03-10T14:58:32.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:32 vm00 bash[28403]: cluster 2026-03-10T14:58:31.516350+0000 mon.a (mon.0) 1368 : cluster [INF] osd.7 v2:192.168.123.103:6812/1578983727 boot 2026-03-10T14:58:32.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:32 vm00 bash[28403]: cluster 2026-03-10T14:58:31.516488+0000 mon.a (mon.0) 1369 : cluster [INF] osd.1 v2:192.168.123.100:6805/198852601 boot 2026-03-10T14:58:32.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:32 vm00 bash[28403]: cluster 
2026-03-10T14:58:31.516488+0000 mon.a (mon.0) 1369 : cluster [INF] osd.1 v2:192.168.123.100:6805/198852601 boot 2026-03-10T14:58:32.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:32 vm00 bash[28403]: cluster 2026-03-10T14:58:31.516571+0000 mon.a (mon.0) 1370 : cluster [INF] osd.2 v2:192.168.123.100:6809/4087124508 boot 2026-03-10T14:58:32.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:32 vm00 bash[28403]: cluster 2026-03-10T14:58:31.516571+0000 mon.a (mon.0) 1370 : cluster [INF] osd.2 v2:192.168.123.100:6809/4087124508 boot 2026-03-10T14:58:32.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:32 vm00 bash[28403]: cluster 2026-03-10T14:58:31.516696+0000 mon.a (mon.0) 1371 : cluster [DBG] osdmap e325: 8 total, 8 up, 8 in 2026-03-10T14:58:32.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:32 vm00 bash[28403]: cluster 2026-03-10T14:58:31.516696+0000 mon.a (mon.0) 1371 : cluster [DBG] osdmap e325: 8 total, 8 up, 8 in 2026-03-10T14:58:32.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:32 vm00 bash[28403]: audit 2026-03-10T14:58:31.522150+0000 mon.a (mon.0) 1372 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T14:58:32.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:32 vm00 bash[28403]: audit 2026-03-10T14:58:31.522150+0000 mon.a (mon.0) 1372 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T14:58:32.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:32 vm00 bash[28403]: audit 2026-03-10T14:58:31.522826+0000 mon.a (mon.0) 1373 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T14:58:32.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:32 vm00 bash[28403]: audit 2026-03-10T14:58:31.522826+0000 mon.a (mon.0) 1373 : audit [DBG] from='mgr.24425 
192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T14:58:32.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:32 vm00 bash[28403]: audit 2026-03-10T14:58:31.523354+0000 mon.a (mon.0) 1374 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T14:58:32.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:32 vm00 bash[28403]: audit 2026-03-10T14:58:31.523354+0000 mon.a (mon.0) 1374 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T14:58:32.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:32 vm00 bash[28403]: cluster 2026-03-10T14:58:32.518056+0000 mon.a (mon.0) 1375 : cluster [DBG] osdmap e326: 8 total, 8 up, 8 in 2026-03-10T14:58:32.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:32 vm00 bash[28403]: cluster 2026-03-10T14:58:32.518056+0000 mon.a (mon.0) 1375 : cluster [DBG] osdmap e326: 8 total, 8 up, 8 in 2026-03-10T14:58:32.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:32 vm00 bash[20726]: cluster 2026-03-10T14:58:31.489539+0000 mon.a (mon.0) 1367 : cluster [INF] Health check cleared: OSD_DOWN (was: 3 osds down) 2026-03-10T14:58:32.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:32 vm00 bash[20726]: cluster 2026-03-10T14:58:31.489539+0000 mon.a (mon.0) 1367 : cluster [INF] Health check cleared: OSD_DOWN (was: 3 osds down) 2026-03-10T14:58:32.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:32 vm00 bash[20726]: cluster 2026-03-10T14:58:31.516350+0000 mon.a (mon.0) 1368 : cluster [INF] osd.7 v2:192.168.123.103:6812/1578983727 boot 2026-03-10T14:58:32.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:32 vm00 bash[20726]: cluster 2026-03-10T14:58:31.516350+0000 mon.a (mon.0) 1368 : cluster [INF] osd.7 v2:192.168.123.103:6812/1578983727 boot 2026-03-10T14:58:32.965 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:32 vm00 bash[20726]: cluster 2026-03-10T14:58:31.516488+0000 mon.a (mon.0) 1369 : cluster [INF] osd.1 v2:192.168.123.100:6805/198852601 boot 2026-03-10T14:58:32.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:32 vm00 bash[20726]: cluster 2026-03-10T14:58:31.516488+0000 mon.a (mon.0) 1369 : cluster [INF] osd.1 v2:192.168.123.100:6805/198852601 boot 2026-03-10T14:58:32.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:32 vm00 bash[20726]: cluster 2026-03-10T14:58:31.516571+0000 mon.a (mon.0) 1370 : cluster [INF] osd.2 v2:192.168.123.100:6809/4087124508 boot 2026-03-10T14:58:32.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:32 vm00 bash[20726]: cluster 2026-03-10T14:58:31.516571+0000 mon.a (mon.0) 1370 : cluster [INF] osd.2 v2:192.168.123.100:6809/4087124508 boot 2026-03-10T14:58:32.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:32 vm00 bash[20726]: cluster 2026-03-10T14:58:31.516696+0000 mon.a (mon.0) 1371 : cluster [DBG] osdmap e325: 8 total, 8 up, 8 in 2026-03-10T14:58:32.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:32 vm00 bash[20726]: cluster 2026-03-10T14:58:31.516696+0000 mon.a (mon.0) 1371 : cluster [DBG] osdmap e325: 8 total, 8 up, 8 in 2026-03-10T14:58:32.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:32 vm00 bash[20726]: audit 2026-03-10T14:58:31.522150+0000 mon.a (mon.0) 1372 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T14:58:32.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:32 vm00 bash[20726]: audit 2026-03-10T14:58:31.522150+0000 mon.a (mon.0) 1372 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T14:58:32.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:32 vm00 bash[20726]: audit 2026-03-10T14:58:31.522826+0000 mon.a (mon.0) 1373 : audit [DBG] from='mgr.24425 
192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T14:58:32.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:32 vm00 bash[20726]: audit 2026-03-10T14:58:31.522826+0000 mon.a (mon.0) 1373 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T14:58:32.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:32 vm00 bash[20726]: audit 2026-03-10T14:58:31.523354+0000 mon.a (mon.0) 1374 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T14:58:32.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:32 vm00 bash[20726]: audit 2026-03-10T14:58:31.523354+0000 mon.a (mon.0) 1374 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T14:58:32.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:32 vm00 bash[20726]: cluster 2026-03-10T14:58:32.518056+0000 mon.a (mon.0) 1375 : cluster [DBG] osdmap e326: 8 total, 8 up, 8 in 2026-03-10T14:58:32.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:32 vm00 bash[20726]: cluster 2026-03-10T14:58:32.518056+0000 mon.a (mon.0) 1375 : cluster [DBG] osdmap e326: 8 total, 8 up, 8 in 2026-03-10T14:58:33.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:33 vm03 bash[23394]: cluster 2026-03-10T14:58:32.299608+0000 mgr.y (mgr.24425) 257 : cluster [DBG] pgmap v443: 196 pgs: 4 undersized+degraded+peered+wait, 6 active+undersized+degraded+wait, 62 active+undersized, 33 undersized+peered, 1 stale+active+clean, 3 unknown, 7 undersized+degraded+peered, 7 undersized+peered+wait, 19 active+undersized+wait, 22 active+undersized+degraded, 32 active+clean; 455 KiB data, 531 MiB used, 159 GiB / 160 GiB avail; 207/597 objects degraded (34.673%) 2026-03-10T14:58:33.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:33 vm03 
bash[23394]: cluster 2026-03-10T14:58:32.299608+0000 mgr.y (mgr.24425) 257 : cluster [DBG] pgmap v443: 196 pgs: 4 undersized+degraded+peered+wait, 6 active+undersized+degraded+wait, 62 active+undersized, 33 undersized+peered, 1 stale+active+clean, 3 unknown, 7 undersized+degraded+peered, 7 undersized+peered+wait, 19 active+undersized+wait, 22 active+undersized+degraded, 32 active+clean; 455 KiB data, 531 MiB used, 159 GiB / 160 GiB avail; 207/597 objects degraded (34.673%) 2026-03-10T14:58:33.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:33 vm00 bash[28403]: cluster 2026-03-10T14:58:32.299608+0000 mgr.y (mgr.24425) 257 : cluster [DBG] pgmap v443: 196 pgs: 4 undersized+degraded+peered+wait, 6 active+undersized+degraded+wait, 62 active+undersized, 33 undersized+peered, 1 stale+active+clean, 3 unknown, 7 undersized+degraded+peered, 7 undersized+peered+wait, 19 active+undersized+wait, 22 active+undersized+degraded, 32 active+clean; 455 KiB data, 531 MiB used, 159 GiB / 160 GiB avail; 207/597 objects degraded (34.673%) 2026-03-10T14:58:33.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:33 vm00 bash[28403]: cluster 2026-03-10T14:58:32.299608+0000 mgr.y (mgr.24425) 257 : cluster [DBG] pgmap v443: 196 pgs: 4 undersized+degraded+peered+wait, 6 active+undersized+degraded+wait, 62 active+undersized, 33 undersized+peered, 1 stale+active+clean, 3 unknown, 7 undersized+degraded+peered, 7 undersized+peered+wait, 19 active+undersized+wait, 22 active+undersized+degraded, 32 active+clean; 455 KiB data, 531 MiB used, 159 GiB / 160 GiB avail; 207/597 objects degraded (34.673%) 2026-03-10T14:58:33.964 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:58:33 vm00 bash[21005]: ::ffff:192.168.123.103 - - [10/Mar/2026:14:58:33] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:58:33.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:33 vm00 bash[20726]: cluster 2026-03-10T14:58:32.299608+0000 mgr.y (mgr.24425) 257 : cluster [DBG] pgmap v443: 196 
pgs: 4 undersized+degraded+peered+wait, 6 active+undersized+degraded+wait, 62 active+undersized, 33 undersized+peered, 1 stale+active+clean, 3 unknown, 7 undersized+degraded+peered, 7 undersized+peered+wait, 19 active+undersized+wait, 22 active+undersized+degraded, 32 active+clean; 455 KiB data, 531 MiB used, 159 GiB / 160 GiB avail; 207/597 objects degraded (34.673%) 2026-03-10T14:58:33.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:33 vm00 bash[20726]: cluster 2026-03-10T14:58:32.299608+0000 mgr.y (mgr.24425) 257 : cluster [DBG] pgmap v443: 196 pgs: 4 undersized+degraded+peered+wait, 6 active+undersized+degraded+wait, 62 active+undersized, 33 undersized+peered, 1 stale+active+clean, 3 unknown, 7 undersized+degraded+peered, 7 undersized+peered+wait, 19 active+undersized+wait, 22 active+undersized+degraded, 32 active+clean; 455 KiB data, 531 MiB used, 159 GiB / 160 GiB avail; 207/597 objects degraded (34.673%) 2026-03-10T14:58:34.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:34 vm03 bash[23394]: audit 2026-03-10T14:58:34.422938+0000 mon.b (mon.1) 58 : audit [INF] from='client.? 192.168.123.100:0/2375185877' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:34.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:34 vm03 bash[23394]: audit 2026-03-10T14:58:34.422938+0000 mon.b (mon.1) 58 : audit [INF] from='client.? 192.168.123.100:0/2375185877' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:34.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:34 vm03 bash[23394]: audit 2026-03-10T14:58:34.427091+0000 mon.a (mon.0) 1376 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:34.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:34 vm03 bash[23394]: audit 2026-03-10T14:58:34.427091+0000 mon.a (mon.0) 1376 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:34.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:34 vm00 bash[28403]: audit 2026-03-10T14:58:34.422938+0000 mon.b (mon.1) 58 : audit [INF] from='client.? 192.168.123.100:0/2375185877' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:34.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:34 vm00 bash[28403]: audit 2026-03-10T14:58:34.422938+0000 mon.b (mon.1) 58 : audit [INF] from='client.? 192.168.123.100:0/2375185877' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:34.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:34 vm00 bash[28403]: audit 2026-03-10T14:58:34.427091+0000 mon.a (mon.0) 1376 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:34.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:34 vm00 bash[28403]: audit 2026-03-10T14:58:34.427091+0000 mon.a (mon.0) 1376 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:34.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:34 vm00 bash[20726]: audit 2026-03-10T14:58:34.422938+0000 mon.b (mon.1) 58 : audit [INF] from='client.? 192.168.123.100:0/2375185877' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:34.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:34 vm00 bash[20726]: audit 2026-03-10T14:58:34.422938+0000 mon.b (mon.1) 58 : audit [INF] from='client.? 192.168.123.100:0/2375185877' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:34.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:34 vm00 bash[20726]: audit 2026-03-10T14:58:34.427091+0000 mon.a (mon.0) 1376 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:34.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:34 vm00 bash[20726]: audit 2026-03-10T14:58:34.427091+0000 mon.a (mon.0) 1376 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:35.641 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_aio_read_wait_for_complete_and_cb_error PASSED [ 79%] 2026-03-10T14:58:35.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:35 vm00 bash[28403]: cluster 2026-03-10T14:58:34.299945+0000 mgr.y (mgr.24425) 258 : cluster [DBG] pgmap v445: 196 pgs: 4 undersized+degraded+peered+wait, 6 active+undersized+degraded+wait, 62 active+undersized, 33 undersized+peered, 1 stale+active+clean, 3 unknown, 7 undersized+degraded+peered, 7 undersized+peered+wait, 19 active+undersized+wait, 22 active+undersized+degraded, 32 active+clean; 455 KiB data, 531 MiB used, 159 GiB / 160 GiB avail; 207/597 objects degraded (34.673%) 2026-03-10T14:58:35.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:35 vm00 bash[28403]: cluster 2026-03-10T14:58:34.299945+0000 mgr.y (mgr.24425) 258 : cluster [DBG] pgmap v445: 196 pgs: 4 undersized+degraded+peered+wait, 6 active+undersized+degraded+wait, 62 active+undersized, 33 undersized+peered, 1 stale+active+clean, 3 unknown, 7 undersized+degraded+peered, 7 undersized+peered+wait, 19 active+undersized+wait, 22 active+undersized+degraded, 32 active+clean; 455 KiB data, 531 MiB used, 159 GiB / 160 GiB avail; 207/597 objects degraded (34.673%) 2026-03-10T14:58:35.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:35 vm00 bash[28403]: audit 2026-03-10T14:58:34.561757+0000 mon.a (mon.0) 1377 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:58:35.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:35 vm00 bash[28403]: audit 2026-03-10T14:58:34.561757+0000 mon.a (mon.0) 1377 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:58:35.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:35 vm00 bash[28403]: cluster 2026-03-10T14:58:34.568133+0000 mon.a (mon.0) 1378 : cluster [DBG] osdmap e327: 8 total, 8 up, 8 in 2026-03-10T14:58:35.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:35 vm00 bash[28403]: cluster 2026-03-10T14:58:34.568133+0000 mon.a (mon.0) 1378 : cluster [DBG] osdmap e327: 8 total, 8 up, 8 in 2026-03-10T14:58:35.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:35 vm00 bash[20726]: cluster 2026-03-10T14:58:34.299945+0000 mgr.y (mgr.24425) 258 : cluster [DBG] pgmap v445: 196 pgs: 4 undersized+degraded+peered+wait, 6 active+undersized+degraded+wait, 62 active+undersized, 33 undersized+peered, 1 stale+active+clean, 3 unknown, 7 undersized+degraded+peered, 7 undersized+peered+wait, 19 active+undersized+wait, 22 active+undersized+degraded, 32 active+clean; 455 KiB data, 531 MiB used, 159 GiB / 160 GiB avail; 207/597 objects degraded (34.673%) 2026-03-10T14:58:35.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:35 vm00 bash[20726]: cluster 2026-03-10T14:58:34.299945+0000 mgr.y (mgr.24425) 258 : cluster [DBG] pgmap v445: 196 pgs: 4 undersized+degraded+peered+wait, 6 active+undersized+degraded+wait, 62 active+undersized, 33 undersized+peered, 1 stale+active+clean, 3 unknown, 7 undersized+degraded+peered, 7 undersized+peered+wait, 19 active+undersized+wait, 22 active+undersized+degraded, 32 active+clean; 455 KiB data, 531 MiB used, 159 GiB / 160 GiB avail; 207/597 objects degraded (34.673%) 2026-03-10T14:58:35.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:35 vm00 bash[20726]: audit 
2026-03-10T14:58:34.561757+0000 mon.a (mon.0) 1377 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:58:35.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:35 vm00 bash[20726]: audit 2026-03-10T14:58:34.561757+0000 mon.a (mon.0) 1377 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:58:35.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:35 vm00 bash[20726]: cluster 2026-03-10T14:58:34.568133+0000 mon.a (mon.0) 1378 : cluster [DBG] osdmap e327: 8 total, 8 up, 8 in 2026-03-10T14:58:35.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:35 vm00 bash[20726]: cluster 2026-03-10T14:58:34.568133+0000 mon.a (mon.0) 1378 : cluster [DBG] osdmap e327: 8 total, 8 up, 8 in 2026-03-10T14:58:36.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:35 vm03 bash[23394]: cluster 2026-03-10T14:58:34.299945+0000 mgr.y (mgr.24425) 258 : cluster [DBG] pgmap v445: 196 pgs: 4 undersized+degraded+peered+wait, 6 active+undersized+degraded+wait, 62 active+undersized, 33 undersized+peered, 1 stale+active+clean, 3 unknown, 7 undersized+degraded+peered, 7 undersized+peered+wait, 19 active+undersized+wait, 22 active+undersized+degraded, 32 active+clean; 455 KiB data, 531 MiB used, 159 GiB / 160 GiB avail; 207/597 objects degraded (34.673%) 2026-03-10T14:58:36.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:35 vm03 bash[23394]: cluster 2026-03-10T14:58:34.299945+0000 mgr.y (mgr.24425) 258 : cluster [DBG] pgmap v445: 196 pgs: 4 undersized+degraded+peered+wait, 6 active+undersized+degraded+wait, 62 active+undersized, 33 undersized+peered, 1 stale+active+clean, 3 unknown, 7 undersized+degraded+peered, 7 undersized+peered+wait, 19 active+undersized+wait, 22 active+undersized+degraded, 32 active+clean; 455 KiB data, 531 MiB used, 159 GiB / 160 GiB avail; 207/597 objects degraded (34.673%) 2026-03-10T14:58:36.125 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:35 vm03 bash[23394]: audit 2026-03-10T14:58:34.561757+0000 mon.a (mon.0) 1377 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:58:36.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:35 vm03 bash[23394]: audit 2026-03-10T14:58:34.561757+0000 mon.a (mon.0) 1377 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:58:36.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:35 vm03 bash[23394]: cluster 2026-03-10T14:58:34.568133+0000 mon.a (mon.0) 1378 : cluster [DBG] osdmap e327: 8 total, 8 up, 8 in 2026-03-10T14:58:36.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:35 vm03 bash[23394]: cluster 2026-03-10T14:58:34.568133+0000 mon.a (mon.0) 1378 : cluster [DBG] osdmap e327: 8 total, 8 up, 8 in 2026-03-10T14:58:36.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:36 vm00 bash[28403]: cluster 2026-03-10T14:58:35.639294+0000 mon.a (mon.0) 1379 : cluster [DBG] osdmap e328: 8 total, 8 up, 8 in 2026-03-10T14:58:36.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:36 vm00 bash[28403]: cluster 2026-03-10T14:58:35.639294+0000 mon.a (mon.0) 1379 : cluster [DBG] osdmap e328: 8 total, 8 up, 8 in 2026-03-10T14:58:36.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:36 vm00 bash[20726]: cluster 2026-03-10T14:58:35.639294+0000 mon.a (mon.0) 1379 : cluster [DBG] osdmap e328: 8 total, 8 up, 8 in 2026-03-10T14:58:36.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:36 vm00 bash[20726]: cluster 2026-03-10T14:58:35.639294+0000 mon.a (mon.0) 1379 : cluster [DBG] osdmap e328: 8 total, 8 up, 8 in 2026-03-10T14:58:37.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:36 vm03 bash[23394]: cluster 2026-03-10T14:58:35.639294+0000 mon.a (mon.0) 1379 : cluster [DBG] osdmap e328: 8 total, 8 up, 8 in 2026-03-10T14:58:37.125 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:36 vm03 bash[23394]: cluster 2026-03-10T14:58:35.639294+0000 mon.a (mon.0) 1379 : cluster [DBG] osdmap e328: 8 total, 8 up, 8 in 2026-03-10T14:58:38.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:37 vm03 bash[23394]: cluster 2026-03-10T14:58:36.300188+0000 mgr.y (mgr.24425) 259 : cluster [DBG] pgmap v448: 164 pgs: 164 active+clean; 455 KiB data, 531 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-10T14:58:38.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:37 vm03 bash[23394]: cluster 2026-03-10T14:58:36.300188+0000 mgr.y (mgr.24425) 259 : cluster [DBG] pgmap v448: 164 pgs: 164 active+clean; 455 KiB data, 531 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-10T14:58:38.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:37 vm03 bash[23394]: cluster 2026-03-10T14:58:36.691109+0000 mon.a (mon.0) 1380 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:58:38.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:37 vm03 bash[23394]: cluster 2026-03-10T14:58:36.691109+0000 mon.a (mon.0) 1380 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:58:38.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:37 vm03 bash[23394]: cluster 2026-03-10T14:58:36.691129+0000 mon.a (mon.0) 1381 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 207/597 objects degraded (34.673%), 39 pgs degraded) 2026-03-10T14:58:38.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:37 vm03 bash[23394]: cluster 2026-03-10T14:58:36.691129+0000 mon.a (mon.0) 1381 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 207/597 objects degraded (34.673%), 39 pgs degraded) 2026-03-10T14:58:38.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:37 vm03 bash[23394]: cluster 2026-03-10T14:58:36.718299+0000 
mon.a (mon.0) 1382 : cluster [DBG] osdmap e329: 8 total, 8 up, 8 in 2026-03-10T14:58:38.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:37 vm03 bash[23394]: cluster 2026-03-10T14:58:36.718299+0000 mon.a (mon.0) 1382 : cluster [DBG] osdmap e329: 8 total, 8 up, 8 in 2026-03-10T14:58:38.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:37 vm00 bash[28403]: cluster 2026-03-10T14:58:36.300188+0000 mgr.y (mgr.24425) 259 : cluster [DBG] pgmap v448: 164 pgs: 164 active+clean; 455 KiB data, 531 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-10T14:58:38.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:37 vm00 bash[28403]: cluster 2026-03-10T14:58:36.300188+0000 mgr.y (mgr.24425) 259 : cluster [DBG] pgmap v448: 164 pgs: 164 active+clean; 455 KiB data, 531 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-10T14:58:38.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:37 vm00 bash[28403]: cluster 2026-03-10T14:58:36.691109+0000 mon.a (mon.0) 1380 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:58:38.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:37 vm00 bash[28403]: cluster 2026-03-10T14:58:36.691109+0000 mon.a (mon.0) 1380 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:58:38.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:37 vm00 bash[28403]: cluster 2026-03-10T14:58:36.691129+0000 mon.a (mon.0) 1381 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 207/597 objects degraded (34.673%), 39 pgs degraded) 2026-03-10T14:58:38.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:37 vm00 bash[28403]: cluster 2026-03-10T14:58:36.691129+0000 mon.a (mon.0) 1381 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 207/597 objects degraded (34.673%), 39 pgs degraded) 2026-03-10T14:58:38.214 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:37 vm00 bash[28403]: cluster 2026-03-10T14:58:36.718299+0000 mon.a (mon.0) 1382 : cluster [DBG] osdmap e329: 8 total, 8 up, 8 in 2026-03-10T14:58:38.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:37 vm00 bash[28403]: cluster 2026-03-10T14:58:36.718299+0000 mon.a (mon.0) 1382 : cluster [DBG] osdmap e329: 8 total, 8 up, 8 in 2026-03-10T14:58:38.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:37 vm00 bash[20726]: cluster 2026-03-10T14:58:36.300188+0000 mgr.y (mgr.24425) 259 : cluster [DBG] pgmap v448: 164 pgs: 164 active+clean; 455 KiB data, 531 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-10T14:58:38.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:37 vm00 bash[20726]: cluster 2026-03-10T14:58:36.300188+0000 mgr.y (mgr.24425) 259 : cluster [DBG] pgmap v448: 164 pgs: 164 active+clean; 455 KiB data, 531 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-10T14:58:38.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:37 vm00 bash[20726]: cluster 2026-03-10T14:58:36.691109+0000 mon.a (mon.0) 1380 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:58:38.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:37 vm00 bash[20726]: cluster 2026-03-10T14:58:36.691109+0000 mon.a (mon.0) 1380 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:58:38.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:37 vm00 bash[20726]: cluster 2026-03-10T14:58:36.691129+0000 mon.a (mon.0) 1381 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 207/597 objects degraded (34.673%), 39 pgs degraded) 2026-03-10T14:58:38.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:37 vm00 bash[20726]: cluster 2026-03-10T14:58:36.691129+0000 mon.a (mon.0) 1381 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded 
data redundancy: 207/597 objects degraded (34.673%), 39 pgs degraded) 2026-03-10T14:58:38.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:37 vm00 bash[20726]: cluster 2026-03-10T14:58:36.718299+0000 mon.a (mon.0) 1382 : cluster [DBG] osdmap e329: 8 total, 8 up, 8 in 2026-03-10T14:58:38.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:37 vm00 bash[20726]: cluster 2026-03-10T14:58:36.718299+0000 mon.a (mon.0) 1382 : cluster [DBG] osdmap e329: 8 total, 8 up, 8 in 2026-03-10T14:58:39.125 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 14:58:38 vm03 bash[48459]: debug there is no tcmu-runner data available 2026-03-10T14:58:39.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:38 vm03 bash[23394]: cluster 2026-03-10T14:58:37.759789+0000 mon.a (mon.0) 1383 : cluster [DBG] osdmap e330: 8 total, 8 up, 8 in 2026-03-10T14:58:39.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:38 vm03 bash[23394]: cluster 2026-03-10T14:58:37.759789+0000 mon.a (mon.0) 1383 : cluster [DBG] osdmap e330: 8 total, 8 up, 8 in 2026-03-10T14:58:39.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:38 vm03 bash[23394]: audit 2026-03-10T14:58:37.981119+0000 mon.c (mon.2) 51 : audit [INF] from='client.? 192.168.123.100:0/545929423' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:39.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:38 vm03 bash[23394]: audit 2026-03-10T14:58:37.981119+0000 mon.c (mon.2) 51 : audit [INF] from='client.? 192.168.123.100:0/545929423' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:39.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:38 vm03 bash[23394]: audit 2026-03-10T14:58:37.981375+0000 mon.a (mon.0) 1384 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:39.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:38 vm03 bash[23394]: audit 2026-03-10T14:58:37.981375+0000 mon.a (mon.0) 1384 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:39.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:38 vm03 bash[23394]: audit 2026-03-10T14:58:38.742279+0000 mon.a (mon.0) 1385 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:58:39.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:38 vm03 bash[23394]: audit 2026-03-10T14:58:38.742279+0000 mon.a (mon.0) 1385 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:58:39.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:38 vm03 bash[23394]: cluster 2026-03-10T14:58:38.746202+0000 mon.a (mon.0) 1386 : cluster [DBG] osdmap e331: 8 total, 8 up, 8 in 2026-03-10T14:58:39.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:38 vm03 bash[23394]: cluster 2026-03-10T14:58:38.746202+0000 mon.a (mon.0) 1386 : cluster [DBG] osdmap e331: 8 total, 8 up, 8 in 2026-03-10T14:58:39.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:38 vm00 bash[28403]: cluster 2026-03-10T14:58:37.759789+0000 mon.a (mon.0) 1383 : cluster [DBG] osdmap e330: 8 total, 8 up, 8 in 2026-03-10T14:58:39.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:38 vm00 bash[28403]: cluster 2026-03-10T14:58:37.759789+0000 mon.a (mon.0) 1383 : cluster [DBG] osdmap e330: 8 total, 8 up, 8 in 2026-03-10T14:58:39.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:38 vm00 bash[28403]: audit 2026-03-10T14:58:37.981119+0000 mon.c (mon.2) 51 : audit [INF] from='client.? 
192.168.123.100:0/545929423' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:39.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:38 vm00 bash[28403]: audit 2026-03-10T14:58:37.981119+0000 mon.c (mon.2) 51 : audit [INF] from='client.? 192.168.123.100:0/545929423' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:39.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:38 vm00 bash[28403]: audit 2026-03-10T14:58:37.981375+0000 mon.a (mon.0) 1384 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:39.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:38 vm00 bash[28403]: audit 2026-03-10T14:58:37.981375+0000 mon.a (mon.0) 1384 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:39.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:38 vm00 bash[28403]: audit 2026-03-10T14:58:38.742279+0000 mon.a (mon.0) 1385 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:58:39.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:38 vm00 bash[28403]: audit 2026-03-10T14:58:38.742279+0000 mon.a (mon.0) 1385 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:58:39.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:38 vm00 bash[28403]: cluster 2026-03-10T14:58:38.746202+0000 mon.a (mon.0) 1386 : cluster [DBG] osdmap e331: 8 total, 8 up, 8 in 2026-03-10T14:58:39.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:38 vm00 bash[28403]: cluster 2026-03-10T14:58:38.746202+0000 mon.a (mon.0) 1386 : cluster [DBG] osdmap e331: 8 total, 8 up, 8 in 2026-03-10T14:58:39.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:38 vm00 bash[20726]: cluster 2026-03-10T14:58:37.759789+0000 mon.a (mon.0) 1383 : cluster [DBG] osdmap e330: 8 total, 8 up, 8 in 2026-03-10T14:58:39.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:38 vm00 bash[20726]: cluster 2026-03-10T14:58:37.759789+0000 mon.a (mon.0) 1383 : cluster [DBG] osdmap e330: 8 total, 8 up, 8 in 2026-03-10T14:58:39.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:38 vm00 bash[20726]: audit 2026-03-10T14:58:37.981119+0000 mon.c (mon.2) 51 : audit [INF] from='client.? 192.168.123.100:0/545929423' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:39.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:38 vm00 bash[20726]: audit 2026-03-10T14:58:37.981119+0000 mon.c (mon.2) 51 : audit [INF] from='client.? 192.168.123.100:0/545929423' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:39.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:38 vm00 bash[20726]: audit 2026-03-10T14:58:37.981375+0000 mon.a (mon.0) 1384 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:39.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:38 vm00 bash[20726]: audit 2026-03-10T14:58:37.981375+0000 mon.a (mon.0) 1384 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:39.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:38 vm00 bash[20726]: audit 2026-03-10T14:58:38.742279+0000 mon.a (mon.0) 1385 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:58:39.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:38 vm00 bash[20726]: audit 2026-03-10T14:58:38.742279+0000 mon.a (mon.0) 1385 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:58:39.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:38 vm00 bash[20726]: cluster 2026-03-10T14:58:38.746202+0000 mon.a (mon.0) 1386 : cluster [DBG] osdmap e331: 8 total, 8 up, 8 in 2026-03-10T14:58:39.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:38 vm00 bash[20726]: cluster 2026-03-10T14:58:38.746202+0000 mon.a (mon.0) 1386 : cluster [DBG] osdmap e331: 8 total, 8 up, 8 in 2026-03-10T14:58:39.753 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_lock PASSED [ 80%] 2026-03-10T14:58:40.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:39 vm03 bash[23394]: cluster 2026-03-10T14:58:38.300459+0000 mgr.y (mgr.24425) 260 : cluster [DBG] pgmap v451: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 531 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-10T14:58:40.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:39 vm03 bash[23394]: cluster 2026-03-10T14:58:38.300459+0000 mgr.y (mgr.24425) 260 : cluster [DBG] pgmap v451: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 531 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-10T14:58:40.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:39 vm03 bash[23394]: audit 2026-03-10T14:58:38.790947+0000 mgr.y (mgr.24425) 261 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' 
cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:58:40.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:39 vm03 bash[23394]: audit 2026-03-10T14:58:38.790947+0000 mgr.y (mgr.24425) 261 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:58:40.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:39 vm03 bash[23394]: cluster 2026-03-10T14:58:39.749189+0000 mon.a (mon.0) 1387 : cluster [DBG] osdmap e332: 8 total, 8 up, 8 in 2026-03-10T14:58:40.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:39 vm03 bash[23394]: cluster 2026-03-10T14:58:39.749189+0000 mon.a (mon.0) 1387 : cluster [DBG] osdmap e332: 8 total, 8 up, 8 in 2026-03-10T14:58:40.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:39 vm00 bash[28403]: cluster 2026-03-10T14:58:38.300459+0000 mgr.y (mgr.24425) 260 : cluster [DBG] pgmap v451: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 531 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-10T14:58:40.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:39 vm00 bash[28403]: cluster 2026-03-10T14:58:38.300459+0000 mgr.y (mgr.24425) 260 : cluster [DBG] pgmap v451: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 531 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-10T14:58:40.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:39 vm00 bash[28403]: audit 2026-03-10T14:58:38.790947+0000 mgr.y (mgr.24425) 261 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:58:40.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:39 vm00 bash[28403]: audit 2026-03-10T14:58:38.790947+0000 mgr.y (mgr.24425) 261 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:58:40.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 
10 14:58:39 vm00 bash[28403]: cluster 2026-03-10T14:58:39.749189+0000 mon.a (mon.0) 1387 : cluster [DBG] osdmap e332: 8 total, 8 up, 8 in 2026-03-10T14:58:40.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:39 vm00 bash[28403]: cluster 2026-03-10T14:58:39.749189+0000 mon.a (mon.0) 1387 : cluster [DBG] osdmap e332: 8 total, 8 up, 8 in 2026-03-10T14:58:40.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:39 vm00 bash[20726]: cluster 2026-03-10T14:58:38.300459+0000 mgr.y (mgr.24425) 260 : cluster [DBG] pgmap v451: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 531 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-10T14:58:40.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:39 vm00 bash[20726]: cluster 2026-03-10T14:58:38.300459+0000 mgr.y (mgr.24425) 260 : cluster [DBG] pgmap v451: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 531 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-10T14:58:40.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:39 vm00 bash[20726]: audit 2026-03-10T14:58:38.790947+0000 mgr.y (mgr.24425) 261 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:58:40.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:39 vm00 bash[20726]: audit 2026-03-10T14:58:38.790947+0000 mgr.y (mgr.24425) 261 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:58:40.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:39 vm00 bash[20726]: cluster 2026-03-10T14:58:39.749189+0000 mon.a (mon.0) 1387 : cluster [DBG] osdmap e332: 8 total, 8 up, 8 in 2026-03-10T14:58:40.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:39 vm00 bash[20726]: cluster 2026-03-10T14:58:39.749189+0000 mon.a (mon.0) 1387 : cluster [DBG] osdmap e332: 8 total, 8 up, 8 in 2026-03-10T14:58:41.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 
14:58:41 vm03 bash[23394]: audit 2026-03-10T14:58:40.087414+0000 mon.a (mon.0) 1388 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:58:41.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:41 vm03 bash[23394]: audit 2026-03-10T14:58:40.087414+0000 mon.a (mon.0) 1388 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:58:41.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:41 vm03 bash[23394]: audit 2026-03-10T14:58:40.088969+0000 mon.a (mon.0) 1389 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:58:41.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:41 vm03 bash[23394]: audit 2026-03-10T14:58:40.088969+0000 mon.a (mon.0) 1389 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:58:41.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:41 vm03 bash[23394]: cluster 2026-03-10T14:58:40.301011+0000 mgr.y (mgr.24425) 262 : cluster [DBG] pgmap v454: 164 pgs: 164 active+clean; 455 KiB data, 540 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:58:41.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:41 vm03 bash[23394]: cluster 2026-03-10T14:58:40.301011+0000 mgr.y (mgr.24425) 262 : cluster [DBG] pgmap v454: 164 pgs: 164 active+clean; 455 KiB data, 540 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:58:41.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:41 vm03 bash[23394]: cluster 2026-03-10T14:58:40.794685+0000 mon.a (mon.0) 1390 : cluster [DBG] osdmap e333: 8 total, 8 up, 8 in 2026-03-10T14:58:41.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:41 vm03 bash[23394]: cluster 2026-03-10T14:58:40.794685+0000 mon.a (mon.0) 1390 : cluster [DBG] osdmap e333: 8 total, 8 up, 8 in 2026-03-10T14:58:41.464 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:41 vm00 bash[28403]: audit 2026-03-10T14:58:40.087414+0000 mon.a (mon.0) 1388 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:58:41.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:41 vm00 bash[28403]: audit 2026-03-10T14:58:40.087414+0000 mon.a (mon.0) 1388 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:58:41.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:41 vm00 bash[28403]: audit 2026-03-10T14:58:40.088969+0000 mon.a (mon.0) 1389 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:58:41.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:41 vm00 bash[28403]: audit 2026-03-10T14:58:40.088969+0000 mon.a (mon.0) 1389 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:58:41.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:41 vm00 bash[28403]: cluster 2026-03-10T14:58:40.301011+0000 mgr.y (mgr.24425) 262 : cluster [DBG] pgmap v454: 164 pgs: 164 active+clean; 455 KiB data, 540 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:58:41.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:41 vm00 bash[28403]: cluster 2026-03-10T14:58:40.301011+0000 mgr.y (mgr.24425) 262 : cluster [DBG] pgmap v454: 164 pgs: 164 active+clean; 455 KiB data, 540 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:58:41.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:41 vm00 bash[28403]: cluster 2026-03-10T14:58:40.794685+0000 mon.a (mon.0) 1390 : cluster [DBG] osdmap e333: 8 total, 8 up, 8 in 2026-03-10T14:58:41.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:41 vm00 bash[28403]: cluster 2026-03-10T14:58:40.794685+0000 mon.a (mon.0) 1390 : cluster [DBG] osdmap e333: 8 total, 8 up, 
8 in 2026-03-10T14:58:41.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:41 vm00 bash[20726]: audit 2026-03-10T14:58:40.087414+0000 mon.a (mon.0) 1388 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:58:41.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:41 vm00 bash[20726]: audit 2026-03-10T14:58:40.087414+0000 mon.a (mon.0) 1388 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' 2026-03-10T14:58:41.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:41 vm00 bash[20726]: audit 2026-03-10T14:58:40.088969+0000 mon.a (mon.0) 1389 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:58:41.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:41 vm00 bash[20726]: audit 2026-03-10T14:58:40.088969+0000 mon.a (mon.0) 1389 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:58:41.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:41 vm00 bash[20726]: cluster 2026-03-10T14:58:40.301011+0000 mgr.y (mgr.24425) 262 : cluster [DBG] pgmap v454: 164 pgs: 164 active+clean; 455 KiB data, 540 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:58:41.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:41 vm00 bash[20726]: cluster 2026-03-10T14:58:40.301011+0000 mgr.y (mgr.24425) 262 : cluster [DBG] pgmap v454: 164 pgs: 164 active+clean; 455 KiB data, 540 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:58:41.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:41 vm00 bash[20726]: cluster 2026-03-10T14:58:40.794685+0000 mon.a (mon.0) 1390 : cluster [DBG] osdmap e333: 8 total, 8 up, 8 in 2026-03-10T14:58:41.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:41 vm00 bash[20726]: cluster 2026-03-10T14:58:40.794685+0000 mon.a (mon.0) 1390 : cluster [DBG] 
osdmap e333: 8 total, 8 up, 8 in 2026-03-10T14:58:43.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:42 vm00 bash[28403]: cluster 2026-03-10T14:58:41.802939+0000 mon.a (mon.0) 1391 : cluster [DBG] osdmap e334: 8 total, 8 up, 8 in 2026-03-10T14:58:43.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:42 vm00 bash[28403]: cluster 2026-03-10T14:58:41.802939+0000 mon.a (mon.0) 1391 : cluster [DBG] osdmap e334: 8 total, 8 up, 8 in 2026-03-10T14:58:43.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:42 vm00 bash[28403]: audit 2026-03-10T14:58:41.836312+0000 mon.c (mon.2) 52 : audit [INF] from='client.? 192.168.123.100:0/135403117' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:43.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:42 vm00 bash[28403]: audit 2026-03-10T14:58:41.836312+0000 mon.c (mon.2) 52 : audit [INF] from='client.? 192.168.123.100:0/135403117' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:43.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:42 vm00 bash[28403]: audit 2026-03-10T14:58:41.836525+0000 mon.a (mon.0) 1392 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:43.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:42 vm00 bash[28403]: audit 2026-03-10T14:58:41.836525+0000 mon.a (mon.0) 1392 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:43.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:42 vm00 bash[20726]: cluster 2026-03-10T14:58:41.802939+0000 mon.a (mon.0) 1391 : cluster [DBG] osdmap e334: 8 total, 8 up, 8 in 2026-03-10T14:58:43.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:42 vm00 bash[20726]: cluster 2026-03-10T14:58:41.802939+0000 mon.a (mon.0) 1391 : cluster [DBG] osdmap e334: 8 total, 8 up, 8 in 2026-03-10T14:58:43.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:42 vm00 bash[20726]: audit 2026-03-10T14:58:41.836312+0000 mon.c (mon.2) 52 : audit [INF] from='client.? 192.168.123.100:0/135403117' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:43.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:42 vm00 bash[20726]: audit 2026-03-10T14:58:41.836312+0000 mon.c (mon.2) 52 : audit [INF] from='client.? 192.168.123.100:0/135403117' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:43.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:42 vm00 bash[20726]: audit 2026-03-10T14:58:41.836525+0000 mon.a (mon.0) 1392 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:43.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:42 vm00 bash[20726]: audit 2026-03-10T14:58:41.836525+0000 mon.a (mon.0) 1392 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:43.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:42 vm03 bash[23394]: cluster 2026-03-10T14:58:41.802939+0000 mon.a (mon.0) 1391 : cluster [DBG] osdmap e334: 8 total, 8 up, 8 in 2026-03-10T14:58:43.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:42 vm03 bash[23394]: cluster 2026-03-10T14:58:41.802939+0000 mon.a (mon.0) 1391 : cluster [DBG] osdmap e334: 8 total, 8 up, 8 in 2026-03-10T14:58:43.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:42 vm03 bash[23394]: audit 2026-03-10T14:58:41.836312+0000 mon.c (mon.2) 52 : audit [INF] from='client.? 192.168.123.100:0/135403117' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:43.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:42 vm03 bash[23394]: audit 2026-03-10T14:58:41.836312+0000 mon.c (mon.2) 52 : audit [INF] from='client.? 192.168.123.100:0/135403117' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:43.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:42 vm03 bash[23394]: audit 2026-03-10T14:58:41.836525+0000 mon.a (mon.0) 1392 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:43.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:42 vm03 bash[23394]: audit 2026-03-10T14:58:41.836525+0000 mon.a (mon.0) 1392 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:43.935 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_execute PASSED [ 81%] 2026-03-10T14:58:44.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:43 vm00 bash[20726]: cluster 2026-03-10T14:58:42.301296+0000 mgr.y (mgr.24425) 263 : cluster [DBG] pgmap v457: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 540 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:58:44.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:43 vm00 bash[20726]: cluster 2026-03-10T14:58:42.301296+0000 mgr.y (mgr.24425) 263 : cluster [DBG] pgmap v457: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 540 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:58:44.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:43 vm00 bash[20726]: cluster 2026-03-10T14:58:42.792530+0000 mon.a (mon.0) 1393 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:58:44.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:43 vm00 bash[20726]: cluster 2026-03-10T14:58:42.792530+0000 mon.a (mon.0) 1393 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:58:44.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:43 vm00 bash[20726]: audit 2026-03-10T14:58:42.928500+0000 mon.a (mon.0) 1394 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:58:44.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:43 vm00 bash[20726]: audit 2026-03-10T14:58:42.928500+0000 mon.a (mon.0) 1394 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:58:44.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:43 vm00 bash[20726]: cluster 2026-03-10T14:58:42.940198+0000 mon.a (mon.0) 1395 : cluster [DBG] osdmap e335: 8 total, 8 up, 8 in 2026-03-10T14:58:44.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:43 vm00 bash[20726]: cluster 2026-03-10T14:58:42.940198+0000 mon.a (mon.0) 1395 : cluster [DBG] osdmap e335: 8 total, 8 up, 8 in 2026-03-10T14:58:44.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:43 vm00 bash[28403]: cluster 2026-03-10T14:58:42.301296+0000 mgr.y (mgr.24425) 263 : cluster [DBG] pgmap v457: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 540 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:58:44.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:43 vm00 bash[28403]: cluster 2026-03-10T14:58:42.301296+0000 mgr.y (mgr.24425) 263 : cluster [DBG] pgmap v457: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 540 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:58:44.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:43 vm00 bash[28403]: cluster 2026-03-10T14:58:42.792530+0000 mon.a (mon.0) 1393 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:58:44.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:43 vm00 bash[28403]: cluster 2026-03-10T14:58:42.792530+0000 mon.a (mon.0) 1393 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:58:44.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:43 vm00 bash[28403]: audit 2026-03-10T14:58:42.928500+0000 mon.a (mon.0) 1394 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:58:44.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:43 vm00 bash[28403]: audit 2026-03-10T14:58:42.928500+0000 mon.a (mon.0) 1394 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:58:44.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:43 vm00 bash[28403]: cluster 2026-03-10T14:58:42.940198+0000 mon.a (mon.0) 1395 : cluster [DBG] osdmap e335: 8 total, 8 up, 8 in 2026-03-10T14:58:44.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:43 vm00 bash[28403]: cluster 2026-03-10T14:58:42.940198+0000 mon.a (mon.0) 1395 : cluster [DBG] osdmap e335: 8 total, 8 up, 8 in 2026-03-10T14:58:44.215 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:58:43 vm00 bash[21005]: ::ffff:192.168.123.103 - - [10/Mar/2026:14:58:43] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:58:44.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:43 vm03 bash[23394]: cluster 2026-03-10T14:58:42.301296+0000 mgr.y (mgr.24425) 263 : cluster [DBG] pgmap v457: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 540 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:58:44.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:43 vm03 bash[23394]: cluster 2026-03-10T14:58:42.301296+0000 mgr.y (mgr.24425) 263 : cluster [DBG] pgmap v457: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 540 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:58:44.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:43 vm03 bash[23394]: cluster 2026-03-10T14:58:42.792530+0000 mon.a (mon.0) 1393 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:58:44.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:43 vm03 bash[23394]: cluster 2026-03-10T14:58:42.792530+0000 mon.a (mon.0) 1393 : cluster [WRN] Health 
check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:58:44.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:43 vm03 bash[23394]: audit 2026-03-10T14:58:42.928500+0000 mon.a (mon.0) 1394 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:58:44.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:43 vm03 bash[23394]: audit 2026-03-10T14:58:42.928500+0000 mon.a (mon.0) 1394 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:58:44.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:43 vm03 bash[23394]: cluster 2026-03-10T14:58:42.940198+0000 mon.a (mon.0) 1395 : cluster [DBG] osdmap e335: 8 total, 8 up, 8 in 2026-03-10T14:58:44.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:43 vm03 bash[23394]: cluster 2026-03-10T14:58:42.940198+0000 mon.a (mon.0) 1395 : cluster [DBG] osdmap e335: 8 total, 8 up, 8 in 2026-03-10T14:58:45.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:45 vm03 bash[23394]: cluster 2026-03-10T14:58:43.936855+0000 mon.a (mon.0) 1396 : cluster [DBG] osdmap e336: 8 total, 8 up, 8 in 2026-03-10T14:58:45.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:45 vm03 bash[23394]: cluster 2026-03-10T14:58:43.936855+0000 mon.a (mon.0) 1396 : cluster [DBG] osdmap e336: 8 total, 8 up, 8 in 2026-03-10T14:58:45.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:45 vm03 bash[23394]: cluster 2026-03-10T14:58:44.301593+0000 mgr.y (mgr.24425) 264 : cluster [DBG] pgmap v460: 164 pgs: 164 active+clean; 455 KiB data, 540 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:58:45.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:45 vm03 bash[23394]: cluster 2026-03-10T14:58:44.301593+0000 mgr.y (mgr.24425) 264 : cluster [DBG] pgmap v460: 164 pgs: 164 active+clean; 455 KiB data, 540 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:58:45.464 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:45 vm00 bash[20726]: cluster 2026-03-10T14:58:43.936855+0000 mon.a (mon.0) 1396 : cluster [DBG] osdmap e336: 8 total, 8 up, 8 in 2026-03-10T14:58:45.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:45 vm00 bash[20726]: cluster 2026-03-10T14:58:43.936855+0000 mon.a (mon.0) 1396 : cluster [DBG] osdmap e336: 8 total, 8 up, 8 in 2026-03-10T14:58:45.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:45 vm00 bash[20726]: cluster 2026-03-10T14:58:44.301593+0000 mgr.y (mgr.24425) 264 : cluster [DBG] pgmap v460: 164 pgs: 164 active+clean; 455 KiB data, 540 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:58:45.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:45 vm00 bash[20726]: cluster 2026-03-10T14:58:44.301593+0000 mgr.y (mgr.24425) 264 : cluster [DBG] pgmap v460: 164 pgs: 164 active+clean; 455 KiB data, 540 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:58:45.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:45 vm00 bash[28403]: cluster 2026-03-10T14:58:43.936855+0000 mon.a (mon.0) 1396 : cluster [DBG] osdmap e336: 8 total, 8 up, 8 in 2026-03-10T14:58:45.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:45 vm00 bash[28403]: cluster 2026-03-10T14:58:43.936855+0000 mon.a (mon.0) 1396 : cluster [DBG] osdmap e336: 8 total, 8 up, 8 in 2026-03-10T14:58:45.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:45 vm00 bash[28403]: cluster 2026-03-10T14:58:44.301593+0000 mgr.y (mgr.24425) 264 : cluster [DBG] pgmap v460: 164 pgs: 164 active+clean; 455 KiB data, 540 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:58:45.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:45 vm00 bash[28403]: cluster 2026-03-10T14:58:44.301593+0000 mgr.y (mgr.24425) 264 : cluster [DBG] pgmap v460: 164 pgs: 164 active+clean; 455 KiB data, 540 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:58:46.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:46 vm03 bash[23394]: cluster 2026-03-10T14:58:45.018855+0000 
mon.a (mon.0) 1397 : cluster [DBG] osdmap e337: 8 total, 8 up, 8 in 2026-03-10T14:58:46.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:46 vm03 bash[23394]: cluster 2026-03-10T14:58:45.018855+0000 mon.a (mon.0) 1397 : cluster [DBG] osdmap e337: 8 total, 8 up, 8 in 2026-03-10T14:58:46.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:46 vm00 bash[28403]: cluster 2026-03-10T14:58:45.018855+0000 mon.a (mon.0) 1397 : cluster [DBG] osdmap e337: 8 total, 8 up, 8 in 2026-03-10T14:58:46.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:46 vm00 bash[28403]: cluster 2026-03-10T14:58:45.018855+0000 mon.a (mon.0) 1397 : cluster [DBG] osdmap e337: 8 total, 8 up, 8 in 2026-03-10T14:58:46.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:46 vm00 bash[20726]: cluster 2026-03-10T14:58:45.018855+0000 mon.a (mon.0) 1397 : cluster [DBG] osdmap e337: 8 total, 8 up, 8 in 2026-03-10T14:58:46.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:46 vm00 bash[20726]: cluster 2026-03-10T14:58:45.018855+0000 mon.a (mon.0) 1397 : cluster [DBG] osdmap e337: 8 total, 8 up, 8 in 2026-03-10T14:58:47.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:47 vm03 bash[23394]: cluster 2026-03-10T14:58:46.016915+0000 mon.a (mon.0) 1398 : cluster [DBG] osdmap e338: 8 total, 8 up, 8 in 2026-03-10T14:58:47.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:47 vm03 bash[23394]: cluster 2026-03-10T14:58:46.016915+0000 mon.a (mon.0) 1398 : cluster [DBG] osdmap e338: 8 total, 8 up, 8 in 2026-03-10T14:58:47.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:47 vm03 bash[23394]: audit 2026-03-10T14:58:46.066685+0000 mon.b (mon.1) 59 : audit [INF] from='client.? 192.168.123.100:0/226877426' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:47.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:47 vm03 bash[23394]: audit 2026-03-10T14:58:46.066685+0000 mon.b (mon.1) 59 : audit [INF] from='client.? 
192.168.123.100:0/226877426' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:47.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:47 vm03 bash[23394]: audit 2026-03-10T14:58:46.070937+0000 mon.a (mon.0) 1399 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:47.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:47 vm03 bash[23394]: audit 2026-03-10T14:58:46.070937+0000 mon.a (mon.0) 1399 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:47.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:47 vm03 bash[23394]: cluster 2026-03-10T14:58:46.301911+0000 mgr.y (mgr.24425) 265 : cluster [DBG] pgmap v463: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 544 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:58:47.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:47 vm03 bash[23394]: cluster 2026-03-10T14:58:46.301911+0000 mgr.y (mgr.24425) 265 : cluster [DBG] pgmap v463: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 544 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:58:47.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:47 vm00 bash[20726]: cluster 2026-03-10T14:58:46.016915+0000 mon.a (mon.0) 1398 : cluster [DBG] osdmap e338: 8 total, 8 up, 8 in 2026-03-10T14:58:47.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:47 vm00 bash[20726]: cluster 2026-03-10T14:58:46.016915+0000 mon.a (mon.0) 1398 : cluster [DBG] osdmap e338: 8 total, 8 up, 8 in 2026-03-10T14:58:47.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:47 vm00 bash[20726]: audit 2026-03-10T14:58:46.066685+0000 mon.b (mon.1) 59 : audit [INF] from='client.? 
192.168.123.100:0/226877426' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:47.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:47 vm00 bash[20726]: audit 2026-03-10T14:58:46.066685+0000 mon.b (mon.1) 59 : audit [INF] from='client.? 192.168.123.100:0/226877426' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:47.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:47 vm00 bash[20726]: audit 2026-03-10T14:58:46.070937+0000 mon.a (mon.0) 1399 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:47.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:47 vm00 bash[20726]: audit 2026-03-10T14:58:46.070937+0000 mon.a (mon.0) 1399 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:47.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:47 vm00 bash[20726]: cluster 2026-03-10T14:58:46.301911+0000 mgr.y (mgr.24425) 265 : cluster [DBG] pgmap v463: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 544 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:58:47.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:47 vm00 bash[20726]: cluster 2026-03-10T14:58:46.301911+0000 mgr.y (mgr.24425) 265 : cluster [DBG] pgmap v463: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 544 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:58:47.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:47 vm00 bash[28403]: cluster 2026-03-10T14:58:46.016915+0000 mon.a (mon.0) 1398 : cluster [DBG] osdmap e338: 8 total, 8 up, 8 in 2026-03-10T14:58:47.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:47 vm00 bash[28403]: cluster 2026-03-10T14:58:46.016915+0000 mon.a (mon.0) 1398 : cluster [DBG] osdmap e338: 8 total, 8 up, 8 in 2026-03-10T14:58:47.465 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:47 vm00 bash[28403]: audit 2026-03-10T14:58:46.066685+0000 mon.b (mon.1) 59 : audit [INF] from='client.? 192.168.123.100:0/226877426' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:47.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:47 vm00 bash[28403]: audit 2026-03-10T14:58:46.066685+0000 mon.b (mon.1) 59 : audit [INF] from='client.? 192.168.123.100:0/226877426' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:47.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:47 vm00 bash[28403]: audit 2026-03-10T14:58:46.070937+0000 mon.a (mon.0) 1399 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:47.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:47 vm00 bash[28403]: audit 2026-03-10T14:58:46.070937+0000 mon.a (mon.0) 1399 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:47.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:47 vm00 bash[28403]: cluster 2026-03-10T14:58:46.301911+0000 mgr.y (mgr.24425) 265 : cluster [DBG] pgmap v463: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 544 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:58:47.465 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:47 vm00 bash[28403]: cluster 2026-03-10T14:58:46.301911+0000 mgr.y (mgr.24425) 265 : cluster [DBG] pgmap v463: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 544 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:58:48.325 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_aio_execute PASSED [ 82%] 2026-03-10T14:58:48.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:48 vm03 bash[23394]: audit 2026-03-10T14:58:47.044863+0000 mon.a 
(mon.0) 1400 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:58:48.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:48 vm03 bash[23394]: audit 2026-03-10T14:58:47.044863+0000 mon.a (mon.0) 1400 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:58:48.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:48 vm03 bash[23394]: cluster 2026-03-10T14:58:47.052196+0000 mon.a (mon.0) 1401 : cluster [DBG] osdmap e339: 8 total, 8 up, 8 in 2026-03-10T14:58:48.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:48 vm03 bash[23394]: cluster 2026-03-10T14:58:47.052196+0000 mon.a (mon.0) 1401 : cluster [DBG] osdmap e339: 8 total, 8 up, 8 in 2026-03-10T14:58:48.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:48 vm00 bash[20726]: audit 2026-03-10T14:58:47.044863+0000 mon.a (mon.0) 1400 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:58:48.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:48 vm00 bash[20726]: audit 2026-03-10T14:58:47.044863+0000 mon.a (mon.0) 1400 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:58:48.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:48 vm00 bash[20726]: cluster 2026-03-10T14:58:47.052196+0000 mon.a (mon.0) 1401 : cluster [DBG] osdmap e339: 8 total, 8 up, 8 in 2026-03-10T14:58:48.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:48 vm00 bash[20726]: cluster 2026-03-10T14:58:47.052196+0000 mon.a (mon.0) 1401 : cluster [DBG] osdmap e339: 8 total, 8 up, 8 in 2026-03-10T14:58:48.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:48 vm00 bash[28403]: audit 2026-03-10T14:58:47.044863+0000 mon.a (mon.0) 1400 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:58:48.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:48 vm00 bash[28403]: audit 2026-03-10T14:58:47.044863+0000 mon.a (mon.0) 1400 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:58:48.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:48 vm00 bash[28403]: cluster 2026-03-10T14:58:47.052196+0000 mon.a (mon.0) 1401 : cluster [DBG] osdmap e339: 8 total, 8 up, 8 in 2026-03-10T14:58:48.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:48 vm00 bash[28403]: cluster 2026-03-10T14:58:47.052196+0000 mon.a (mon.0) 1401 : cluster [DBG] osdmap e339: 8 total, 8 up, 8 in 2026-03-10T14:58:49.125 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 14:58:48 vm03 bash[48459]: debug there is no tcmu-runner data available 2026-03-10T14:58:49.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:49 vm03 bash[23394]: cluster 2026-03-10T14:58:48.302229+0000 mgr.y (mgr.24425) 266 : cluster [DBG] pgmap v465: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 544 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-10T14:58:49.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:49 vm03 bash[23394]: cluster 2026-03-10T14:58:48.302229+0000 mgr.y (mgr.24425) 266 : cluster [DBG] pgmap v465: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 544 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-10T14:58:49.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:49 vm03 bash[23394]: cluster 2026-03-10T14:58:48.327269+0000 mon.a (mon.0) 1402 : cluster [DBG] osdmap e340: 8 total, 8 up, 8 in 2026-03-10T14:58:49.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:49 vm03 bash[23394]: cluster 2026-03-10T14:58:48.327269+0000 mon.a (mon.0) 1402 : cluster [DBG] osdmap e340: 8 total, 8 up, 8 in 2026-03-10T14:58:49.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 
10 14:58:49 vm03 bash[23394]: audit 2026-03-10T14:58:48.799084+0000 mgr.y (mgr.24425) 267 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:58:49.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:49 vm03 bash[23394]: audit 2026-03-10T14:58:48.799084+0000 mgr.y (mgr.24425) 267 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:58:49.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:49 vm00 bash[28403]: cluster 2026-03-10T14:58:48.302229+0000 mgr.y (mgr.24425) 266 : cluster [DBG] pgmap v465: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 544 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-10T14:58:49.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:49 vm00 bash[28403]: cluster 2026-03-10T14:58:48.302229+0000 mgr.y (mgr.24425) 266 : cluster [DBG] pgmap v465: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 544 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-10T14:58:49.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:49 vm00 bash[28403]: cluster 2026-03-10T14:58:48.327269+0000 mon.a (mon.0) 1402 : cluster [DBG] osdmap e340: 8 total, 8 up, 8 in 2026-03-10T14:58:49.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:49 vm00 bash[28403]: cluster 2026-03-10T14:58:48.327269+0000 mon.a (mon.0) 1402 : cluster [DBG] osdmap e340: 8 total, 8 up, 8 in 2026-03-10T14:58:49.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:49 vm00 bash[28403]: audit 2026-03-10T14:58:48.799084+0000 mgr.y (mgr.24425) 267 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:58:49.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:49 vm00 bash[28403]: audit 2026-03-10T14:58:48.799084+0000 mgr.y (mgr.24425) 267 : audit [DBG] 
from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:58:49.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:49 vm00 bash[20726]: cluster 2026-03-10T14:58:48.302229+0000 mgr.y (mgr.24425) 266 : cluster [DBG] pgmap v465: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 544 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-10T14:58:49.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:49 vm00 bash[20726]: cluster 2026-03-10T14:58:48.302229+0000 mgr.y (mgr.24425) 266 : cluster [DBG] pgmap v465: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 544 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-10T14:58:49.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:49 vm00 bash[20726]: cluster 2026-03-10T14:58:48.327269+0000 mon.a (mon.0) 1402 : cluster [DBG] osdmap e340: 8 total, 8 up, 8 in 2026-03-10T14:58:49.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:49 vm00 bash[20726]: cluster 2026-03-10T14:58:48.327269+0000 mon.a (mon.0) 1402 : cluster [DBG] osdmap e340: 8 total, 8 up, 8 in 2026-03-10T14:58:49.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:49 vm00 bash[20726]: audit 2026-03-10T14:58:48.799084+0000 mgr.y (mgr.24425) 267 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:58:49.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:49 vm00 bash[20726]: audit 2026-03-10T14:58:48.799084+0000 mgr.y (mgr.24425) 267 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:58:50.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:50 vm00 bash[20726]: cluster 2026-03-10T14:58:49.368625+0000 mon.a (mon.0) 1403 : cluster [DBG] osdmap e341: 8 total, 8 up, 8 in 2026-03-10T14:58:50.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:50 
vm00 bash[20726]: cluster 2026-03-10T14:58:49.368625+0000 mon.a (mon.0) 1403 : cluster [DBG] osdmap e341: 8 total, 8 up, 8 in 2026-03-10T14:58:50.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:50 vm00 bash[28403]: cluster 2026-03-10T14:58:49.368625+0000 mon.a (mon.0) 1403 : cluster [DBG] osdmap e341: 8 total, 8 up, 8 in 2026-03-10T14:58:50.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:50 vm00 bash[28403]: cluster 2026-03-10T14:58:49.368625+0000 mon.a (mon.0) 1403 : cluster [DBG] osdmap e341: 8 total, 8 up, 8 in 2026-03-10T14:58:50.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:50 vm03 bash[23394]: cluster 2026-03-10T14:58:49.368625+0000 mon.a (mon.0) 1403 : cluster [DBG] osdmap e341: 8 total, 8 up, 8 in 2026-03-10T14:58:50.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:50 vm03 bash[23394]: cluster 2026-03-10T14:58:49.368625+0000 mon.a (mon.0) 1403 : cluster [DBG] osdmap e341: 8 total, 8 up, 8 in 2026-03-10T14:58:51.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:51 vm00 bash[20726]: cluster 2026-03-10T14:58:50.302916+0000 mgr.y (mgr.24425) 268 : cluster [DBG] pgmap v468: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 549 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:58:51.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:51 vm00 bash[20726]: cluster 2026-03-10T14:58:50.302916+0000 mgr.y (mgr.24425) 268 : cluster [DBG] pgmap v468: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 549 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:58:51.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:51 vm00 bash[20726]: cluster 2026-03-10T14:58:50.380707+0000 mon.a (mon.0) 1404 : cluster [DBG] osdmap e342: 8 total, 8 up, 8 in 2026-03-10T14:58:51.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:51 vm00 bash[20726]: cluster 2026-03-10T14:58:50.380707+0000 mon.a (mon.0) 1404 : cluster [DBG] osdmap e342: 8 total, 8 up, 8 in 2026-03-10T14:58:51.714 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:51 vm00 bash[20726]: audit 2026-03-10T14:58:50.417114+0000 mon.b (mon.1) 60 : audit [INF] from='client.? 192.168.123.100:0/3204291885' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:51.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:51 vm00 bash[20726]: audit 2026-03-10T14:58:50.417114+0000 mon.b (mon.1) 60 : audit [INF] from='client.? 192.168.123.100:0/3204291885' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:51.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:51 vm00 bash[20726]: audit 2026-03-10T14:58:50.421282+0000 mon.a (mon.0) 1405 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:51.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:51 vm00 bash[20726]: audit 2026-03-10T14:58:50.421282+0000 mon.a (mon.0) 1405 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:51.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:51 vm00 bash[28403]: cluster 2026-03-10T14:58:50.302916+0000 mgr.y (mgr.24425) 268 : cluster [DBG] pgmap v468: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 549 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:58:51.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:51 vm00 bash[28403]: cluster 2026-03-10T14:58:50.302916+0000 mgr.y (mgr.24425) 268 : cluster [DBG] pgmap v468: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 549 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:58:51.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:51 vm00 bash[28403]: cluster 2026-03-10T14:58:50.380707+0000 mon.a (mon.0) 1404 : cluster [DBG] osdmap e342: 8 total, 8 up, 8 in 2026-03-10T14:58:51.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:51 vm00 bash[28403]: 
cluster 2026-03-10T14:58:50.380707+0000 mon.a (mon.0) 1404 : cluster [DBG] osdmap e342: 8 total, 8 up, 8 in 2026-03-10T14:58:51.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:51 vm00 bash[28403]: audit 2026-03-10T14:58:50.417114+0000 mon.b (mon.1) 60 : audit [INF] from='client.? 192.168.123.100:0/3204291885' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:51.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:51 vm00 bash[28403]: audit 2026-03-10T14:58:50.417114+0000 mon.b (mon.1) 60 : audit [INF] from='client.? 192.168.123.100:0/3204291885' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:51.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:51 vm00 bash[28403]: audit 2026-03-10T14:58:50.421282+0000 mon.a (mon.0) 1405 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:51.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:51 vm00 bash[28403]: audit 2026-03-10T14:58:50.421282+0000 mon.a (mon.0) 1405 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:51.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:51 vm03 bash[23394]: cluster 2026-03-10T14:58:50.302916+0000 mgr.y (mgr.24425) 268 : cluster [DBG] pgmap v468: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 549 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:58:51.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:51 vm03 bash[23394]: cluster 2026-03-10T14:58:50.302916+0000 mgr.y (mgr.24425) 268 : cluster [DBG] pgmap v468: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 549 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:58:51.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:51 vm03 bash[23394]: cluster 2026-03-10T14:58:50.380707+0000 mon.a (mon.0) 1404 : cluster [DBG] osdmap e342: 8 total, 8 up, 8 in 2026-03-10T14:58:51.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:51 vm03 bash[23394]: cluster 2026-03-10T14:58:50.380707+0000 mon.a (mon.0) 1404 : cluster [DBG] osdmap e342: 8 total, 8 up, 8 in 2026-03-10T14:58:51.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:51 vm03 bash[23394]: audit 2026-03-10T14:58:50.417114+0000 mon.b (mon.1) 60 : audit [INF] from='client.? 192.168.123.100:0/3204291885' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:51.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:51 vm03 bash[23394]: audit 2026-03-10T14:58:50.417114+0000 mon.b (mon.1) 60 : audit [INF] from='client.? 192.168.123.100:0/3204291885' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:51.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:51 vm03 bash[23394]: audit 2026-03-10T14:58:50.421282+0000 mon.a (mon.0) 1405 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:51.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:51 vm03 bash[23394]: audit 2026-03-10T14:58:50.421282+0000 mon.a (mon.0) 1405 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:58:52.431 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_aio_setxattr PASSED [ 83%] 2026-03-10T14:58:52.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:52 vm00 bash[20726]: audit 2026-03-10T14:58:51.407647+0000 mon.a (mon.0) 1406 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:58:52.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:52 vm00 bash[20726]: audit 2026-03-10T14:58:51.407647+0000 mon.a (mon.0) 1406 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:58:52.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:52 vm00 bash[20726]: cluster 2026-03-10T14:58:51.409634+0000 mon.a (mon.0) 1407 : cluster [DBG] osdmap e343: 8 total, 8 up, 8 in 2026-03-10T14:58:52.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:52 vm00 bash[20726]: cluster 2026-03-10T14:58:51.409634+0000 mon.a (mon.0) 1407 : cluster [DBG] osdmap e343: 8 total, 8 up, 8 in 2026-03-10T14:58:52.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:52 vm00 bash[28403]: audit 2026-03-10T14:58:51.407647+0000 mon.a (mon.0) 1406 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:58:52.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:52 vm00 bash[28403]: audit 2026-03-10T14:58:51.407647+0000 mon.a (mon.0) 1406 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:58:52.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:52 vm00 bash[28403]: cluster 2026-03-10T14:58:51.409634+0000 mon.a (mon.0) 1407 : cluster [DBG] osdmap e343: 8 total, 8 up, 8 in 2026-03-10T14:58:52.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:52 vm00 bash[28403]: cluster 2026-03-10T14:58:51.409634+0000 mon.a (mon.0) 1407 : cluster [DBG] osdmap e343: 8 total, 8 up, 8 in 2026-03-10T14:58:52.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:52 vm03 bash[23394]: audit 2026-03-10T14:58:51.407647+0000 mon.a (mon.0) 1406 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:58:52.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:52 vm03 bash[23394]: audit 2026-03-10T14:58:51.407647+0000 mon.a (mon.0) 1406 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:58:52.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:52 vm03 bash[23394]: cluster 2026-03-10T14:58:51.409634+0000 mon.a (mon.0) 1407 : cluster [DBG] osdmap e343: 8 total, 8 up, 8 in 2026-03-10T14:58:52.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:52 vm03 bash[23394]: cluster 2026-03-10T14:58:51.409634+0000 mon.a (mon.0) 1407 : cluster [DBG] osdmap e343: 8 total, 8 up, 8 in 2026-03-10T14:58:53.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:53 vm00 bash[20726]: cluster 2026-03-10T14:58:52.303296+0000 mgr.y (mgr.24425) 269 : cluster [DBG] pgmap v471: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 549 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:58:53.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:53 vm00 bash[20726]: cluster 2026-03-10T14:58:52.303296+0000 mgr.y (mgr.24425) 269 : cluster [DBG] pgmap v471: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 549 MiB 
used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:58:53.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:53 vm00 bash[20726]: cluster 2026-03-10T14:58:52.427214+0000 mon.a (mon.0) 1408 : cluster [DBG] osdmap e344: 8 total, 8 up, 8 in 2026-03-10T14:58:53.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:53 vm00 bash[20726]: cluster 2026-03-10T14:58:52.427214+0000 mon.a (mon.0) 1408 : cluster [DBG] osdmap e344: 8 total, 8 up, 8 in 2026-03-10T14:58:53.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:53 vm00 bash[28403]: cluster 2026-03-10T14:58:52.303296+0000 mgr.y (mgr.24425) 269 : cluster [DBG] pgmap v471: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 549 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:58:53.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:53 vm00 bash[28403]: cluster 2026-03-10T14:58:52.303296+0000 mgr.y (mgr.24425) 269 : cluster [DBG] pgmap v471: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 549 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:58:53.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:53 vm00 bash[28403]: cluster 2026-03-10T14:58:52.427214+0000 mon.a (mon.0) 1408 : cluster [DBG] osdmap e344: 8 total, 8 up, 8 in 2026-03-10T14:58:53.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:53 vm00 bash[28403]: cluster 2026-03-10T14:58:52.427214+0000 mon.a (mon.0) 1408 : cluster [DBG] osdmap e344: 8 total, 8 up, 8 in 2026-03-10T14:58:53.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:53 vm03 bash[23394]: cluster 2026-03-10T14:58:52.303296+0000 mgr.y (mgr.24425) 269 : cluster [DBG] pgmap v471: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 549 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:58:53.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:53 vm03 bash[23394]: cluster 2026-03-10T14:58:52.303296+0000 mgr.y (mgr.24425) 269 : cluster [DBG] pgmap v471: 196 pgs: 32 
creating+peering, 164 active+clean; 455 KiB data, 549 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:58:53.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:53 vm03 bash[23394]: cluster 2026-03-10T14:58:52.427214+0000 mon.a (mon.0) 1408 : cluster [DBG] osdmap e344: 8 total, 8 up, 8 in 2026-03-10T14:58:53.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:53 vm03 bash[23394]: cluster 2026-03-10T14:58:52.427214+0000 mon.a (mon.0) 1408 : cluster [DBG] osdmap e344: 8 total, 8 up, 8 in 2026-03-10T14:58:54.214 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:58:53 vm00 bash[21005]: ::ffff:192.168.123.103 - - [10/Mar/2026:14:58:53] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:58:55.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:54 vm03 bash[23394]: cluster 2026-03-10T14:58:53.478929+0000 mon.a (mon.0) 1409 : cluster [DBG] osdmap e345: 8 total, 8 up, 8 in 2026-03-10T14:58:55.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:54 vm03 bash[23394]: cluster 2026-03-10T14:58:53.478929+0000 mon.a (mon.0) 1409 : cluster [DBG] osdmap e345: 8 total, 8 up, 8 in 2026-03-10T14:58:55.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:54 vm03 bash[23394]: audit 2026-03-10T14:58:53.491653+0000 mon.b (mon.1) 61 : audit [DBG] from='client.? 192.168.123.100:0/3492376873' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T14:58:55.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:54 vm03 bash[23394]: audit 2026-03-10T14:58:53.491653+0000 mon.b (mon.1) 61 : audit [DBG] from='client.? 192.168.123.100:0/3492376873' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T14:58:55.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:54 vm03 bash[23394]: audit 2026-03-10T14:58:53.493312+0000 mon.b (mon.1) 62 : audit [INF] from='client.? 
192.168.123.100:0/3492376873' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app1"}]: dispatch 2026-03-10T14:58:55.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:54 vm03 bash[23394]: audit 2026-03-10T14:58:53.493312+0000 mon.b (mon.1) 62 : audit [INF] from='client.? 192.168.123.100:0/3492376873' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app1"}]: dispatch 2026-03-10T14:58:55.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:54 vm03 bash[23394]: audit 2026-03-10T14:58:53.497445+0000 mon.a (mon.0) 1410 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app1"}]: dispatch 2026-03-10T14:58:55.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:54 vm03 bash[23394]: audit 2026-03-10T14:58:53.497445+0000 mon.a (mon.0) 1410 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app1"}]: dispatch 2026-03-10T14:58:55.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:54 vm00 bash[28403]: cluster 2026-03-10T14:58:53.478929+0000 mon.a (mon.0) 1409 : cluster [DBG] osdmap e345: 8 total, 8 up, 8 in 2026-03-10T14:58:55.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:54 vm00 bash[28403]: cluster 2026-03-10T14:58:53.478929+0000 mon.a (mon.0) 1409 : cluster [DBG] osdmap e345: 8 total, 8 up, 8 in 2026-03-10T14:58:55.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:54 vm00 bash[28403]: audit 2026-03-10T14:58:53.491653+0000 mon.b (mon.1) 61 : audit [DBG] from='client.? 192.168.123.100:0/3492376873' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T14:58:55.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:54 vm00 bash[28403]: audit 2026-03-10T14:58:53.491653+0000 mon.b (mon.1) 61 : audit [DBG] from='client.? 
192.168.123.100:0/3492376873' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T14:58:55.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:54 vm00 bash[28403]: audit 2026-03-10T14:58:53.493312+0000 mon.b (mon.1) 62 : audit [INF] from='client.? 192.168.123.100:0/3492376873' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app1"}]: dispatch 2026-03-10T14:58:55.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:54 vm00 bash[28403]: audit 2026-03-10T14:58:53.493312+0000 mon.b (mon.1) 62 : audit [INF] from='client.? 192.168.123.100:0/3492376873' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app1"}]: dispatch 2026-03-10T14:58:55.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:54 vm00 bash[28403]: audit 2026-03-10T14:58:53.497445+0000 mon.a (mon.0) 1410 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app1"}]: dispatch 2026-03-10T14:58:55.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:54 vm00 bash[28403]: audit 2026-03-10T14:58:53.497445+0000 mon.a (mon.0) 1410 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app1"}]: dispatch 2026-03-10T14:58:55.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:54 vm00 bash[20726]: cluster 2026-03-10T14:58:53.478929+0000 mon.a (mon.0) 1409 : cluster [DBG] osdmap e345: 8 total, 8 up, 8 in 2026-03-10T14:58:55.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:54 vm00 bash[20726]: cluster 2026-03-10T14:58:53.478929+0000 mon.a (mon.0) 1409 : cluster [DBG] osdmap e345: 8 total, 8 up, 8 in 2026-03-10T14:58:55.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:54 vm00 bash[20726]: audit 2026-03-10T14:58:53.491653+0000 mon.b (mon.1) 61 : audit [DBG] from='client.? 
192.168.123.100:0/3492376873' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T14:58:55.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:54 vm00 bash[20726]: audit 2026-03-10T14:58:53.491653+0000 mon.b (mon.1) 61 : audit [DBG] from='client.? 192.168.123.100:0/3492376873' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T14:58:55.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:54 vm00 bash[20726]: audit 2026-03-10T14:58:53.493312+0000 mon.b (mon.1) 62 : audit [INF] from='client.? 192.168.123.100:0/3492376873' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app1"}]: dispatch 2026-03-10T14:58:55.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:54 vm00 bash[20726]: audit 2026-03-10T14:58:53.493312+0000 mon.b (mon.1) 62 : audit [INF] from='client.? 192.168.123.100:0/3492376873' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app1"}]: dispatch 2026-03-10T14:58:55.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:54 vm00 bash[20726]: audit 2026-03-10T14:58:53.497445+0000 mon.a (mon.0) 1410 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app1"}]: dispatch 2026-03-10T14:58:55.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:54 vm00 bash[20726]: audit 2026-03-10T14:58:53.497445+0000 mon.a (mon.0) 1410 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app1"}]: dispatch 2026-03-10T14:58:56.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:55 vm03 bash[23394]: cluster 2026-03-10T14:58:54.303636+0000 mgr.y (mgr.24425) 270 : cluster [DBG] pgmap v474: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 549 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:58:56.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:55 vm03 bash[23394]: cluster 2026-03-10T14:58:54.303636+0000 mgr.y (mgr.24425) 270 : cluster [DBG] pgmap v474: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 549 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:58:56.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:55 vm03 bash[23394]: audit 2026-03-10T14:58:54.475528+0000 mon.a (mon.0) 1411 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test_pool","app": "app1"}]': finished 2026-03-10T14:58:56.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:55 vm03 bash[23394]: audit 2026-03-10T14:58:54.475528+0000 mon.a (mon.0) 1411 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test_pool","app": "app1"}]': finished 2026-03-10T14:58:56.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:55 vm03 bash[23394]: audit 2026-03-10T14:58:54.484517+0000 mon.b (mon.1) 63 : audit [INF] from='client.? 192.168.123.100:0/3492376873' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2"}]: dispatch 2026-03-10T14:58:56.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:55 vm03 bash[23394]: audit 2026-03-10T14:58:54.484517+0000 mon.b (mon.1) 63 : audit [INF] from='client.? 
192.168.123.100:0/3492376873' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2"}]: dispatch 2026-03-10T14:58:56.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:55 vm03 bash[23394]: audit 2026-03-10T14:58:54.485060+0000 mon.b (mon.1) 64 : audit [INF] from='client.? 192.168.123.100:0/3492376873' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-10T14:58:56.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:55 vm03 bash[23394]: audit 2026-03-10T14:58:54.485060+0000 mon.b (mon.1) 64 : audit [INF] from='client.? 192.168.123.100:0/3492376873' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-10T14:58:56.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:55 vm03 bash[23394]: cluster 2026-03-10T14:58:54.489063+0000 mon.a (mon.0) 1412 : cluster [DBG] osdmap e346: 8 total, 8 up, 8 in 2026-03-10T14:58:56.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:55 vm03 bash[23394]: cluster 2026-03-10T14:58:54.489063+0000 mon.a (mon.0) 1412 : cluster [DBG] osdmap e346: 8 total, 8 up, 8 in 2026-03-10T14:58:56.126 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:55 vm03 bash[23394]: audit 2026-03-10T14:58:54.500128+0000 mon.a (mon.0) 1413 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-10T14:58:56.126 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:55 vm03 bash[23394]: audit 2026-03-10T14:58:54.500128+0000 mon.a (mon.0) 1413 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-10T14:58:56.126 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:55 vm03 bash[23394]: audit 2026-03-10T14:58:55.094798+0000 mon.a (mon.0) 1414 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:58:56.126 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:55 vm03 bash[23394]: audit 2026-03-10T14:58:55.094798+0000 mon.a (mon.0) 1414 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:58:56.126 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:55 vm03 bash[23394]: audit 2026-03-10T14:58:55.478907+0000 mon.a (mon.0) 1415 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2","yes_i_really_mean_it": true}]': finished 2026-03-10T14:58:56.126 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:55 vm03 bash[23394]: audit 2026-03-10T14:58:55.478907+0000 mon.a (mon.0) 1415 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2","yes_i_really_mean_it": true}]': finished 2026-03-10T14:58:56.126 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:55 vm03 bash[23394]: audit 2026-03-10T14:58:55.480088+0000 mon.b (mon.1) 65 : audit [INF] from='client.? 192.168.123.100:0/3492376873' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"dne","key":"key","value":"key"}]: dispatch 2026-03-10T14:58:56.126 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:55 vm03 bash[23394]: audit 2026-03-10T14:58:55.480088+0000 mon.b (mon.1) 65 : audit [INF] from='client.? 
192.168.123.100:0/3492376873' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"dne","key":"key","value":"key"}]: dispatch 2026-03-10T14:58:56.126 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:55 vm03 bash[23394]: audit 2026-03-10T14:58:55.480427+0000 mon.b (mon.1) 66 : audit [INF] from='client.? 192.168.123.100:0/3492376873' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key1","value":"val1"}]: dispatch 2026-03-10T14:58:56.126 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:55 vm03 bash[23394]: audit 2026-03-10T14:58:55.480427+0000 mon.b (mon.1) 66 : audit [INF] from='client.? 192.168.123.100:0/3492376873' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key1","value":"val1"}]: dispatch 2026-03-10T14:58:56.126 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:55 vm03 bash[23394]: cluster 2026-03-10T14:58:55.481857+0000 mon.a (mon.0) 1416 : cluster [DBG] osdmap e347: 8 total, 8 up, 8 in 2026-03-10T14:58:56.126 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:55 vm03 bash[23394]: cluster 2026-03-10T14:58:55.481857+0000 mon.a (mon.0) 1416 : cluster [DBG] osdmap e347: 8 total, 8 up, 8 in 2026-03-10T14:58:56.126 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:55 vm03 bash[23394]: audit 2026-03-10T14:58:55.486923+0000 mon.a (mon.0) 1417 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key1","value":"val1"}]: dispatch 2026-03-10T14:58:56.126 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:55 vm03 bash[23394]: audit 2026-03-10T14:58:55.486923+0000 mon.a (mon.0) 1417 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key1","value":"val1"}]: dispatch 2026-03-10T14:58:56.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:55 vm00 bash[20726]: cluster 2026-03-10T14:58:54.303636+0000 mgr.y (mgr.24425) 270 : cluster [DBG] pgmap v474: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 549 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:58:56.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:55 vm00 bash[20726]: cluster 2026-03-10T14:58:54.303636+0000 mgr.y (mgr.24425) 270 : cluster [DBG] pgmap v474: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 549 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:58:56.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:55 vm00 bash[20726]: audit 2026-03-10T14:58:54.475528+0000 mon.a (mon.0) 1411 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test_pool","app": "app1"}]': finished 2026-03-10T14:58:56.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:55 vm00 bash[20726]: audit 2026-03-10T14:58:54.475528+0000 mon.a (mon.0) 1411 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test_pool","app": "app1"}]': finished 2026-03-10T14:58:56.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:55 vm00 bash[20726]: audit 2026-03-10T14:58:54.484517+0000 mon.b (mon.1) 63 : audit [INF] from='client.? 192.168.123.100:0/3492376873' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2"}]: dispatch 2026-03-10T14:58:56.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:55 vm00 bash[20726]: audit 2026-03-10T14:58:54.484517+0000 mon.b (mon.1) 63 : audit [INF] from='client.? 
192.168.123.100:0/3492376873' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2"}]: dispatch 2026-03-10T14:58:56.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:55 vm00 bash[20726]: audit 2026-03-10T14:58:54.485060+0000 mon.b (mon.1) 64 : audit [INF] from='client.? 192.168.123.100:0/3492376873' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-10T14:58:56.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:55 vm00 bash[20726]: audit 2026-03-10T14:58:54.485060+0000 mon.b (mon.1) 64 : audit [INF] from='client.? 192.168.123.100:0/3492376873' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-10T14:58:56.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:55 vm00 bash[20726]: cluster 2026-03-10T14:58:54.489063+0000 mon.a (mon.0) 1412 : cluster [DBG] osdmap e346: 8 total, 8 up, 8 in 2026-03-10T14:58:56.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:55 vm00 bash[20726]: cluster 2026-03-10T14:58:54.489063+0000 mon.a (mon.0) 1412 : cluster [DBG] osdmap e346: 8 total, 8 up, 8 in 2026-03-10T14:58:56.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:55 vm00 bash[20726]: audit 2026-03-10T14:58:54.500128+0000 mon.a (mon.0) 1413 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-10T14:58:56.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:55 vm00 bash[20726]: audit 2026-03-10T14:58:54.500128+0000 mon.a (mon.0) 1413 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-10T14:58:56.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:55 vm00 bash[20726]: audit 2026-03-10T14:58:55.094798+0000 mon.a (mon.0) 1414 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:58:56.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:55 vm00 bash[20726]: audit 2026-03-10T14:58:55.094798+0000 mon.a (mon.0) 1414 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:58:56.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:55 vm00 bash[20726]: audit 2026-03-10T14:58:55.478907+0000 mon.a (mon.0) 1415 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2","yes_i_really_mean_it": true}]': finished 2026-03-10T14:58:56.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:55 vm00 bash[20726]: audit 2026-03-10T14:58:55.478907+0000 mon.a (mon.0) 1415 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2","yes_i_really_mean_it": true}]': finished 2026-03-10T14:58:56.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:55 vm00 bash[20726]: audit 2026-03-10T14:58:55.480088+0000 mon.b (mon.1) 65 : audit [INF] from='client.? 192.168.123.100:0/3492376873' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"dne","key":"key","value":"key"}]: dispatch 2026-03-10T14:58:56.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:55 vm00 bash[20726]: audit 2026-03-10T14:58:55.480088+0000 mon.b (mon.1) 65 : audit [INF] from='client.? 
192.168.123.100:0/3492376873' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"dne","key":"key","value":"key"}]: dispatch 2026-03-10T14:58:56.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:55 vm00 bash[20726]: audit 2026-03-10T14:58:55.480427+0000 mon.b (mon.1) 66 : audit [INF] from='client.? 192.168.123.100:0/3492376873' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key1","value":"val1"}]: dispatch 2026-03-10T14:58:56.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:55 vm00 bash[20726]: audit 2026-03-10T14:58:55.480427+0000 mon.b (mon.1) 66 : audit [INF] from='client.? 192.168.123.100:0/3492376873' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key1","value":"val1"}]: dispatch 2026-03-10T14:58:56.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:55 vm00 bash[20726]: cluster 2026-03-10T14:58:55.481857+0000 mon.a (mon.0) 1416 : cluster [DBG] osdmap e347: 8 total, 8 up, 8 in 2026-03-10T14:58:56.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:55 vm00 bash[20726]: cluster 2026-03-10T14:58:55.481857+0000 mon.a (mon.0) 1416 : cluster [DBG] osdmap e347: 8 total, 8 up, 8 in 2026-03-10T14:58:56.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:55 vm00 bash[20726]: audit 2026-03-10T14:58:55.486923+0000 mon.a (mon.0) 1417 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key1","value":"val1"}]: dispatch 2026-03-10T14:58:56.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:55 vm00 bash[20726]: audit 2026-03-10T14:58:55.486923+0000 mon.a (mon.0) 1417 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key1","value":"val1"}]: dispatch 2026-03-10T14:58:56.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:55 vm00 bash[28403]: cluster 2026-03-10T14:58:54.303636+0000 mgr.y (mgr.24425) 270 : cluster [DBG] pgmap v474: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 549 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:58:56.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:55 vm00 bash[28403]: cluster 2026-03-10T14:58:54.303636+0000 mgr.y (mgr.24425) 270 : cluster [DBG] pgmap v474: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 549 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:58:56.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:55 vm00 bash[28403]: audit 2026-03-10T14:58:54.475528+0000 mon.a (mon.0) 1411 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test_pool","app": "app1"}]': finished 2026-03-10T14:58:56.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:55 vm00 bash[28403]: audit 2026-03-10T14:58:54.475528+0000 mon.a (mon.0) 1411 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test_pool","app": "app1"}]': finished 2026-03-10T14:58:56.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:55 vm00 bash[28403]: audit 2026-03-10T14:58:54.484517+0000 mon.b (mon.1) 63 : audit [INF] from='client.? 192.168.123.100:0/3492376873' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2"}]: dispatch 2026-03-10T14:58:56.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:55 vm00 bash[28403]: audit 2026-03-10T14:58:54.484517+0000 mon.b (mon.1) 63 : audit [INF] from='client.? 
192.168.123.100:0/3492376873' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2"}]: dispatch 2026-03-10T14:58:56.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:55 vm00 bash[28403]: audit 2026-03-10T14:58:54.485060+0000 mon.b (mon.1) 64 : audit [INF] from='client.? 192.168.123.100:0/3492376873' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-10T14:58:56.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:55 vm00 bash[28403]: audit 2026-03-10T14:58:54.485060+0000 mon.b (mon.1) 64 : audit [INF] from='client.? 192.168.123.100:0/3492376873' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-10T14:58:56.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:55 vm00 bash[28403]: cluster 2026-03-10T14:58:54.489063+0000 mon.a (mon.0) 1412 : cluster [DBG] osdmap e346: 8 total, 8 up, 8 in 2026-03-10T14:58:56.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:55 vm00 bash[28403]: cluster 2026-03-10T14:58:54.489063+0000 mon.a (mon.0) 1412 : cluster [DBG] osdmap e346: 8 total, 8 up, 8 in 2026-03-10T14:58:56.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:55 vm00 bash[28403]: audit 2026-03-10T14:58:54.500128+0000 mon.a (mon.0) 1413 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-10T14:58:56.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:55 vm00 bash[28403]: audit 2026-03-10T14:58:54.500128+0000 mon.a (mon.0) 1413 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-10T14:58:56.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:55 vm00 bash[28403]: audit 2026-03-10T14:58:55.094798+0000 mon.a (mon.0) 1414 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:58:56.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:55 vm00 bash[28403]: audit 2026-03-10T14:58:55.094798+0000 mon.a (mon.0) 1414 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:58:56.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:55 vm00 bash[28403]: audit 2026-03-10T14:58:55.478907+0000 mon.a (mon.0) 1415 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2","yes_i_really_mean_it": true}]': finished 2026-03-10T14:58:56.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:55 vm00 bash[28403]: audit 2026-03-10T14:58:55.478907+0000 mon.a (mon.0) 1415 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2","yes_i_really_mean_it": true}]': finished 2026-03-10T14:58:56.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:55 vm00 bash[28403]: audit 2026-03-10T14:58:55.480088+0000 mon.b (mon.1) 65 : audit [INF] from='client.? 192.168.123.100:0/3492376873' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"dne","key":"key","value":"key"}]: dispatch 2026-03-10T14:58:56.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:55 vm00 bash[28403]: audit 2026-03-10T14:58:55.480088+0000 mon.b (mon.1) 65 : audit [INF] from='client.? 
192.168.123.100:0/3492376873' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"dne","key":"key","value":"key"}]: dispatch 2026-03-10T14:58:56.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:55 vm00 bash[28403]: audit 2026-03-10T14:58:55.480427+0000 mon.b (mon.1) 66 : audit [INF] from='client.? 192.168.123.100:0/3492376873' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key1","value":"val1"}]: dispatch 2026-03-10T14:58:56.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:55 vm00 bash[28403]: audit 2026-03-10T14:58:55.480427+0000 mon.b (mon.1) 66 : audit [INF] from='client.? 192.168.123.100:0/3492376873' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key1","value":"val1"}]: dispatch 2026-03-10T14:58:56.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:55 vm00 bash[28403]: cluster 2026-03-10T14:58:55.481857+0000 mon.a (mon.0) 1416 : cluster [DBG] osdmap e347: 8 total, 8 up, 8 in 2026-03-10T14:58:56.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:55 vm00 bash[28403]: cluster 2026-03-10T14:58:55.481857+0000 mon.a (mon.0) 1416 : cluster [DBG] osdmap e347: 8 total, 8 up, 8 in 2026-03-10T14:58:56.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:55 vm00 bash[28403]: audit 2026-03-10T14:58:55.486923+0000 mon.a (mon.0) 1417 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key1","value":"val1"}]: dispatch 2026-03-10T14:58:56.215 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:55 vm00 bash[28403]: audit 2026-03-10T14:58:55.486923+0000 mon.a (mon.0) 1417 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key1","value":"val1"}]: dispatch 2026-03-10T14:58:57.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:57 vm03 bash[23394]: cluster 2026-03-10T14:58:56.303912+0000 mgr.y (mgr.24425) 271 : cluster [DBG] pgmap v477: 196 pgs: 196 active+clean; 455 KiB data, 549 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:58:57.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:57 vm03 bash[23394]: cluster 2026-03-10T14:58:56.303912+0000 mgr.y (mgr.24425) 271 : cluster [DBG] pgmap v477: 196 pgs: 196 active+clean; 455 KiB data, 549 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:58:57.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:57 vm03 bash[23394]: audit 2026-03-10T14:58:56.481576+0000 mon.a (mon.0) 1418 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key1","value":"val1"}]': finished 2026-03-10T14:58:57.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:57 vm03 bash[23394]: audit 2026-03-10T14:58:56.481576+0000 mon.a (mon.0) 1418 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key1","value":"val1"}]': finished 2026-03-10T14:58:57.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:57 vm03 bash[23394]: audit 2026-03-10T14:58:56.484971+0000 mon.b (mon.1) 67 : audit [INF] from='client.? 192.168.123.100:0/3492376873' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key2","value":"val2"}]: dispatch 2026-03-10T14:58:57.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:57 vm03 bash[23394]: audit 2026-03-10T14:58:56.484971+0000 mon.b (mon.1) 67 : audit [INF] from='client.? 
192.168.123.100:0/3492376873' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key2","value":"val2"}]: dispatch 2026-03-10T14:58:57.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:57 vm03 bash[23394]: cluster 2026-03-10T14:58:56.487606+0000 mon.a (mon.0) 1419 : cluster [DBG] osdmap e348: 8 total, 8 up, 8 in 2026-03-10T14:58:57.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:57 vm03 bash[23394]: cluster 2026-03-10T14:58:56.487606+0000 mon.a (mon.0) 1419 : cluster [DBG] osdmap e348: 8 total, 8 up, 8 in 2026-03-10T14:58:57.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:57 vm03 bash[23394]: audit 2026-03-10T14:58:56.489059+0000 mon.a (mon.0) 1420 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key2","value":"val2"}]: dispatch 2026-03-10T14:58:57.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:57 vm03 bash[23394]: audit 2026-03-10T14:58:56.489059+0000 mon.a (mon.0) 1420 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key2","value":"val2"}]: dispatch 2026-03-10T14:58:57.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:57 vm03 bash[23394]: cluster 2026-03-10T14:58:56.780393+0000 mon.a (mon.0) 1421 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:58:57.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:57 vm03 bash[23394]: cluster 2026-03-10T14:58:56.780393+0000 mon.a (mon.0) 1421 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:58:57.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:57 vm00 bash[28403]: cluster 2026-03-10T14:58:56.303912+0000 mgr.y (mgr.24425) 271 : cluster [DBG] pgmap v477: 196 pgs: 196 active+clean; 455 KiB data, 549 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:58:57.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:57 vm00 bash[28403]: cluster 2026-03-10T14:58:56.303912+0000 mgr.y (mgr.24425) 271 : cluster [DBG] pgmap v477: 196 pgs: 196 active+clean; 455 KiB data, 549 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:58:57.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:57 vm00 bash[28403]: audit 2026-03-10T14:58:56.481576+0000 mon.a (mon.0) 1418 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key1","value":"val1"}]': finished 2026-03-10T14:58:57.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:57 vm00 bash[28403]: audit 2026-03-10T14:58:56.481576+0000 mon.a (mon.0) 1418 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key1","value":"val1"}]': finished 2026-03-10T14:58:57.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:57 vm00 bash[28403]: audit 2026-03-10T14:58:56.484971+0000 mon.b (mon.1) 67 : audit [INF] from='client.? 192.168.123.100:0/3492376873' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key2","value":"val2"}]: dispatch 2026-03-10T14:58:57.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:57 vm00 bash[28403]: audit 2026-03-10T14:58:56.484971+0000 mon.b (mon.1) 67 : audit [INF] from='client.? 192.168.123.100:0/3492376873' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key2","value":"val2"}]: dispatch 2026-03-10T14:58:57.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:57 vm00 bash[28403]: cluster 2026-03-10T14:58:56.487606+0000 mon.a (mon.0) 1419 : cluster [DBG] osdmap e348: 8 total, 8 up, 8 in 2026-03-10T14:58:57.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:57 vm00 bash[28403]: cluster 2026-03-10T14:58:56.487606+0000 mon.a (mon.0) 1419 : cluster [DBG] osdmap e348: 8 total, 8 up, 8 in 2026-03-10T14:58:57.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:57 vm00 bash[28403]: audit 2026-03-10T14:58:56.489059+0000 mon.a (mon.0) 1420 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key2","value":"val2"}]: dispatch 2026-03-10T14:58:57.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:57 vm00 bash[28403]: audit 2026-03-10T14:58:56.489059+0000 mon.a (mon.0) 1420 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key2","value":"val2"}]: dispatch 2026-03-10T14:58:57.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:57 vm00 bash[28403]: cluster 2026-03-10T14:58:56.780393+0000 mon.a (mon.0) 1421 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:58:57.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:57 vm00 bash[28403]: cluster 2026-03-10T14:58:56.780393+0000 mon.a (mon.0) 1421 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:58:57.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:57 vm00 bash[20726]: cluster 2026-03-10T14:58:56.303912+0000 mgr.y (mgr.24425) 271 : cluster [DBG] pgmap v477: 196 pgs: 196 active+clean; 455 KiB data, 549 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:58:57.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:57 vm00 bash[20726]: cluster 2026-03-10T14:58:56.303912+0000 mgr.y (mgr.24425) 271 : cluster [DBG] pgmap v477: 196 pgs: 196 active+clean; 455 KiB data, 549 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:58:57.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:57 vm00 bash[20726]: audit 2026-03-10T14:58:56.481576+0000 mon.a (mon.0) 1418 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key1","value":"val1"}]': finished 2026-03-10T14:58:57.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:57 vm00 bash[20726]: audit 2026-03-10T14:58:56.481576+0000 mon.a (mon.0) 1418 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key1","value":"val1"}]': finished 2026-03-10T14:58:57.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:57 vm00 bash[20726]: audit 2026-03-10T14:58:56.484971+0000 mon.b (mon.1) 67 : audit [INF] from='client.? 192.168.123.100:0/3492376873' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key2","value":"val2"}]: dispatch 2026-03-10T14:58:57.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:57 vm00 bash[20726]: audit 2026-03-10T14:58:56.484971+0000 mon.b (mon.1) 67 : audit [INF] from='client.? 192.168.123.100:0/3492376873' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key2","value":"val2"}]: dispatch 2026-03-10T14:58:57.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:57 vm00 bash[20726]: cluster 2026-03-10T14:58:56.487606+0000 mon.a (mon.0) 1419 : cluster [DBG] osdmap e348: 8 total, 8 up, 8 in 2026-03-10T14:58:57.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:57 vm00 bash[20726]: cluster 2026-03-10T14:58:56.487606+0000 mon.a (mon.0) 1419 : cluster [DBG] osdmap e348: 8 total, 8 up, 8 in 2026-03-10T14:58:57.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:57 vm00 bash[20726]: audit 2026-03-10T14:58:56.489059+0000 mon.a (mon.0) 1420 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key2","value":"val2"}]: dispatch 2026-03-10T14:58:57.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:57 vm00 bash[20726]: audit 2026-03-10T14:58:56.489059+0000 mon.a (mon.0) 1420 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key2","value":"val2"}]: dispatch 2026-03-10T14:58:57.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:57 vm00 bash[20726]: cluster 2026-03-10T14:58:56.780393+0000 mon.a (mon.0) 1421 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:58:57.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:57 vm00 bash[20726]: cluster 2026-03-10T14:58:56.780393+0000 mon.a (mon.0) 1421 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:58:58.805 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:58 vm03 bash[23394]: audit 2026-03-10T14:58:57.490915+0000 mon.a (mon.0) 1422 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key2","value":"val2"}]': finished 2026-03-10T14:58:58.805 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:58 vm03 bash[23394]: audit 2026-03-10T14:58:57.490915+0000 mon.a (mon.0) 1422 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key2","value":"val2"}]': finished 2026-03-10T14:58:58.805 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:58 vm03 bash[23394]: cluster 2026-03-10T14:58:57.495502+0000 mon.a (mon.0) 1423 : cluster [DBG] osdmap e349: 8 total, 8 up, 8 in 2026-03-10T14:58:58.805 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:58 vm03 bash[23394]: cluster 2026-03-10T14:58:57.495502+0000 mon.a (mon.0) 1423 : cluster [DBG] osdmap e349: 8 total, 8 up, 8 in 2026-03-10T14:58:58.805 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:58 vm03 bash[23394]: audit 2026-03-10T14:58:57.495879+0000 mon.b (mon.1) 68 : audit [INF] from='client.? 
192.168.123.100:0/3492376873' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app2","key":"key1","value":"val1"}]: dispatch 2026-03-10T14:58:58.805 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:58 vm03 bash[23394]: audit 2026-03-10T14:58:57.495879+0000 mon.b (mon.1) 68 : audit [INF] from='client.? 192.168.123.100:0/3492376873' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app2","key":"key1","value":"val1"}]: dispatch 2026-03-10T14:58:58.805 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:58 vm03 bash[23394]: audit 2026-03-10T14:58:57.511615+0000 mon.a (mon.0) 1424 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app2","key":"key1","value":"val1"}]: dispatch 2026-03-10T14:58:58.805 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:58 vm03 bash[23394]: audit 2026-03-10T14:58:57.511615+0000 mon.a (mon.0) 1424 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app2","key":"key1","value":"val1"}]: dispatch 2026-03-10T14:58:58.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:58 vm00 bash[20726]: audit 2026-03-10T14:58:57.490915+0000 mon.a (mon.0) 1422 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key2","value":"val2"}]': finished 2026-03-10T14:58:58.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:58 vm00 bash[20726]: audit 2026-03-10T14:58:57.490915+0000 mon.a (mon.0) 1422 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key2","value":"val2"}]': finished 2026-03-10T14:58:58.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:58 vm00 bash[20726]: cluster 2026-03-10T14:58:57.495502+0000 mon.a (mon.0) 1423 : cluster [DBG] osdmap e349: 8 total, 8 up, 8 in 2026-03-10T14:58:58.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:58 vm00 bash[20726]: cluster 2026-03-10T14:58:57.495502+0000 mon.a (mon.0) 1423 : cluster [DBG] osdmap e349: 8 total, 8 up, 8 in 2026-03-10T14:58:58.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:58 vm00 bash[20726]: audit 2026-03-10T14:58:57.495879+0000 mon.b (mon.1) 68 : audit [INF] from='client.? 192.168.123.100:0/3492376873' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app2","key":"key1","value":"val1"}]: dispatch 2026-03-10T14:58:58.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:58 vm00 bash[20726]: audit 2026-03-10T14:58:57.495879+0000 mon.b (mon.1) 68 : audit [INF] from='client.? 192.168.123.100:0/3492376873' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app2","key":"key1","value":"val1"}]: dispatch 2026-03-10T14:58:58.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:58 vm00 bash[20726]: audit 2026-03-10T14:58:57.511615+0000 mon.a (mon.0) 1424 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app2","key":"key1","value":"val1"}]: dispatch 2026-03-10T14:58:58.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:58 vm00 bash[20726]: audit 2026-03-10T14:58:57.511615+0000 mon.a (mon.0) 1424 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app2","key":"key1","value":"val1"}]: dispatch 2026-03-10T14:58:58.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:58 vm00 bash[28403]: audit 2026-03-10T14:58:57.490915+0000 mon.a (mon.0) 1422 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key2","value":"val2"}]': finished 2026-03-10T14:58:58.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:58 vm00 bash[28403]: audit 2026-03-10T14:58:57.490915+0000 mon.a (mon.0) 1422 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key2","value":"val2"}]': finished 2026-03-10T14:58:58.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:58 vm00 bash[28403]: cluster 2026-03-10T14:58:57.495502+0000 mon.a (mon.0) 1423 : cluster [DBG] osdmap e349: 8 total, 8 up, 8 in 2026-03-10T14:58:58.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:58 vm00 bash[28403]: cluster 2026-03-10T14:58:57.495502+0000 mon.a (mon.0) 1423 : cluster [DBG] osdmap e349: 8 total, 8 up, 8 in 2026-03-10T14:58:58.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:58 vm00 bash[28403]: audit 2026-03-10T14:58:57.495879+0000 mon.b (mon.1) 68 : audit [INF] from='client.? 192.168.123.100:0/3492376873' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app2","key":"key1","value":"val1"}]: dispatch 2026-03-10T14:58:58.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:58 vm00 bash[28403]: audit 2026-03-10T14:58:57.495879+0000 mon.b (mon.1) 68 : audit [INF] from='client.? 
192.168.123.100:0/3492376873' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app2","key":"key1","value":"val1"}]: dispatch 2026-03-10T14:58:58.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:58 vm00 bash[28403]: audit 2026-03-10T14:58:57.511615+0000 mon.a (mon.0) 1424 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app2","key":"key1","value":"val1"}]: dispatch 2026-03-10T14:58:58.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:58 vm00 bash[28403]: audit 2026-03-10T14:58:57.511615+0000 mon.a (mon.0) 1424 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app2","key":"key1","value":"val1"}]: dispatch 2026-03-10T14:58:59.125 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 14:58:58 vm03 bash[48459]: debug there is no tcmu-runner data available 2026-03-10T14:58:59.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:59 vm03 bash[23394]: cluster 2026-03-10T14:58:58.304182+0000 mgr.y (mgr.24425) 272 : cluster [DBG] pgmap v480: 196 pgs: 196 active+clean; 455 KiB data, 549 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:58:59.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:59 vm03 bash[23394]: cluster 2026-03-10T14:58:58.304182+0000 mgr.y (mgr.24425) 272 : cluster [DBG] pgmap v480: 196 pgs: 196 active+clean; 455 KiB data, 549 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:58:59.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:59 vm03 bash[23394]: audit 2026-03-10T14:58:58.511604+0000 mon.a (mon.0) 1425 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"test_pool","app":"app2","key":"key1","value":"val1"}]': finished 2026-03-10T14:58:59.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:59 vm03 bash[23394]: audit 2026-03-10T14:58:58.511604+0000 mon.a (mon.0) 1425 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"test_pool","app":"app2","key":"key1","value":"val1"}]': finished 2026-03-10T14:58:59.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:59 vm03 bash[23394]: audit 2026-03-10T14:58:58.527867+0000 mon.b (mon.1) 69 : audit [INF] from='client.? 192.168.123.100:0/3492376873' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"test_pool","app":"app1","key":"key1"}]: dispatch 2026-03-10T14:58:59.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:59 vm03 bash[23394]: audit 2026-03-10T14:58:58.527867+0000 mon.b (mon.1) 69 : audit [INF] from='client.? 192.168.123.100:0/3492376873' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"test_pool","app":"app1","key":"key1"}]: dispatch 2026-03-10T14:58:59.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:59 vm03 bash[23394]: cluster 2026-03-10T14:58:58.530902+0000 mon.a (mon.0) 1426 : cluster [DBG] osdmap e350: 8 total, 8 up, 8 in 2026-03-10T14:58:59.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:59 vm03 bash[23394]: cluster 2026-03-10T14:58:58.530902+0000 mon.a (mon.0) 1426 : cluster [DBG] osdmap e350: 8 total, 8 up, 8 in 2026-03-10T14:58:59.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:59 vm03 bash[23394]: audit 2026-03-10T14:58:58.531936+0000 mon.a (mon.0) 1427 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"test_pool","app":"app1","key":"key1"}]: dispatch 2026-03-10T14:58:59.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:58:59 vm03 bash[23394]: audit 2026-03-10T14:58:58.531936+0000 mon.a (mon.0) 1427 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"test_pool","app":"app1","key":"key1"}]: dispatch 2026-03-10T14:58:59.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:59 vm00 bash[20726]: cluster 2026-03-10T14:58:58.304182+0000 mgr.y (mgr.24425) 272 : cluster [DBG] pgmap v480: 196 pgs: 196 active+clean; 455 KiB data, 549 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:58:59.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:59 vm00 bash[20726]: cluster 2026-03-10T14:58:58.304182+0000 mgr.y (mgr.24425) 272 : cluster [DBG] pgmap v480: 196 pgs: 196 active+clean; 455 KiB data, 549 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:58:59.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:59 vm00 bash[20726]: audit 2026-03-10T14:58:58.511604+0000 mon.a (mon.0) 1425 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"test_pool","app":"app2","key":"key1","value":"val1"}]': finished 2026-03-10T14:58:59.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:59 vm00 bash[20726]: audit 2026-03-10T14:58:58.511604+0000 mon.a (mon.0) 1425 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"test_pool","app":"app2","key":"key1","value":"val1"}]': finished 2026-03-10T14:58:59.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:59 vm00 bash[20726]: audit 2026-03-10T14:58:58.527867+0000 mon.b (mon.1) 69 : audit [INF] from='client.? 
192.168.123.100:0/3492376873' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"test_pool","app":"app1","key":"key1"}]: dispatch 2026-03-10T14:58:59.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:59 vm00 bash[20726]: audit 2026-03-10T14:58:58.527867+0000 mon.b (mon.1) 69 : audit [INF] from='client.? 192.168.123.100:0/3492376873' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"test_pool","app":"app1","key":"key1"}]: dispatch 2026-03-10T14:58:59.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:59 vm00 bash[20726]: cluster 2026-03-10T14:58:58.530902+0000 mon.a (mon.0) 1426 : cluster [DBG] osdmap e350: 8 total, 8 up, 8 in 2026-03-10T14:58:59.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:59 vm00 bash[20726]: cluster 2026-03-10T14:58:58.530902+0000 mon.a (mon.0) 1426 : cluster [DBG] osdmap e350: 8 total, 8 up, 8 in 2026-03-10T14:58:59.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:59 vm00 bash[20726]: audit 2026-03-10T14:58:58.531936+0000 mon.a (mon.0) 1427 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"test_pool","app":"app1","key":"key1"}]: dispatch 2026-03-10T14:58:59.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:58:59 vm00 bash[20726]: audit 2026-03-10T14:58:58.531936+0000 mon.a (mon.0) 1427 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"test_pool","app":"app1","key":"key1"}]: dispatch 2026-03-10T14:58:59.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:59 vm00 bash[28403]: cluster 2026-03-10T14:58:58.304182+0000 mgr.y (mgr.24425) 272 : cluster [DBG] pgmap v480: 196 pgs: 196 active+clean; 455 KiB data, 549 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:58:59.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:59 vm00 bash[28403]: cluster 2026-03-10T14:58:58.304182+0000 mgr.y (mgr.24425) 272 : cluster [DBG] pgmap v480: 196 pgs: 196 active+clean; 455 KiB data, 549 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:58:59.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:59 vm00 bash[28403]: audit 2026-03-10T14:58:58.511604+0000 mon.a (mon.0) 1425 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"test_pool","app":"app2","key":"key1","value":"val1"}]': finished 2026-03-10T14:58:59.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:59 vm00 bash[28403]: audit 2026-03-10T14:58:58.511604+0000 mon.a (mon.0) 1425 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"test_pool","app":"app2","key":"key1","value":"val1"}]': finished 2026-03-10T14:58:59.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:59 vm00 bash[28403]: audit 2026-03-10T14:58:58.527867+0000 mon.b (mon.1) 69 : audit [INF] from='client.? 192.168.123.100:0/3492376873' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"test_pool","app":"app1","key":"key1"}]: dispatch 2026-03-10T14:58:59.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:59 vm00 bash[28403]: audit 2026-03-10T14:58:58.527867+0000 mon.b (mon.1) 69 : audit [INF] from='client.? 
192.168.123.100:0/3492376873' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"test_pool","app":"app1","key":"key1"}]: dispatch 2026-03-10T14:58:59.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:59 vm00 bash[28403]: cluster 2026-03-10T14:58:58.530902+0000 mon.a (mon.0) 1426 : cluster [DBG] osdmap e350: 8 total, 8 up, 8 in 2026-03-10T14:58:59.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:59 vm00 bash[28403]: cluster 2026-03-10T14:58:58.530902+0000 mon.a (mon.0) 1426 : cluster [DBG] osdmap e350: 8 total, 8 up, 8 in 2026-03-10T14:58:59.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:59 vm00 bash[28403]: audit 2026-03-10T14:58:58.531936+0000 mon.a (mon.0) 1427 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"test_pool","app":"app1","key":"key1"}]: dispatch 2026-03-10T14:58:59.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:58:59 vm00 bash[28403]: audit 2026-03-10T14:58:58.531936+0000 mon.a (mon.0) 1427 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"test_pool","app":"app1","key":"key1"}]: dispatch 2026-03-10T14:59:00.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:00 vm03 bash[23394]: audit 2026-03-10T14:58:58.808304+0000 mgr.y (mgr.24425) 273 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:59:00.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:00 vm03 bash[23394]: audit 2026-03-10T14:58:58.808304+0000 mgr.y (mgr.24425) 273 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:59:00.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:00 vm03 bash[23394]: audit 2026-03-10T14:58:59.515537+0000 mon.a (mon.0) 1428 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix":"osd pool application rm","pool":"test_pool","app":"app1","key":"key1"}]': finished 2026-03-10T14:59:00.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:00 vm03 bash[23394]: audit 2026-03-10T14:58:59.515537+0000 mon.a (mon.0) 1428 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application rm","pool":"test_pool","app":"app1","key":"key1"}]': finished 2026-03-10T14:59:00.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:00 vm03 bash[23394]: audit 2026-03-10T14:58:59.519701+0000 mon.b (mon.1) 70 : audit [INF] from='client.? 192.168.123.100:0/3492376873' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:59:00.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:00 vm03 bash[23394]: audit 2026-03-10T14:58:59.519701+0000 mon.b (mon.1) 70 : audit [INF] from='client.? 192.168.123.100:0/3492376873' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:59:00.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:00 vm03 bash[23394]: cluster 2026-03-10T14:58:59.522213+0000 mon.a (mon.0) 1429 : cluster [DBG] osdmap e351: 8 total, 8 up, 8 in 2026-03-10T14:59:00.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:00 vm03 bash[23394]: cluster 2026-03-10T14:58:59.522213+0000 mon.a (mon.0) 1429 : cluster [DBG] osdmap e351: 8 total, 8 up, 8 in 2026-03-10T14:59:00.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:00 vm03 bash[23394]: audit 2026-03-10T14:58:59.525847+0000 mon.a (mon.0) 1430 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:59:00.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:00 vm03 bash[23394]: audit 2026-03-10T14:58:59.525847+0000 mon.a (mon.0) 1430 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:59:00.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:00 vm03 bash[23394]: audit 2026-03-10T14:59:00.519092+0000 mon.a (mon.0) 1431 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:59:00.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:00 vm03 bash[23394]: audit 2026-03-10T14:59:00.519092+0000 mon.a (mon.0) 1431 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:59:00.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:00 vm03 bash[23394]: cluster 2026-03-10T14:59:00.522629+0000 mon.a (mon.0) 1432 : cluster [DBG] osdmap e352: 8 total, 8 up, 8 in 2026-03-10T14:59:00.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:00 vm03 bash[23394]: cluster 2026-03-10T14:59:00.522629+0000 mon.a (mon.0) 1432 : cluster [DBG] osdmap e352: 8 total, 8 up, 8 in 2026-03-10T14:59:00.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:00 vm00 bash[28403]: audit 2026-03-10T14:58:58.808304+0000 mgr.y (mgr.24425) 273 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:59:00.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:00 vm00 bash[28403]: audit 2026-03-10T14:58:58.808304+0000 mgr.y (mgr.24425) 273 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:59:00.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:00 vm00 bash[28403]: audit 2026-03-10T14:58:59.515537+0000 mon.a (mon.0) 1428 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix":"osd pool application rm","pool":"test_pool","app":"app1","key":"key1"}]': finished 2026-03-10T14:59:00.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:00 vm00 bash[28403]: audit 2026-03-10T14:58:59.515537+0000 mon.a (mon.0) 1428 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application rm","pool":"test_pool","app":"app1","key":"key1"}]': finished 2026-03-10T14:59:00.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:00 vm00 bash[28403]: audit 2026-03-10T14:58:59.519701+0000 mon.b (mon.1) 70 : audit [INF] from='client.? 192.168.123.100:0/3492376873' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:59:00.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:00 vm00 bash[28403]: audit 2026-03-10T14:58:59.519701+0000 mon.b (mon.1) 70 : audit [INF] from='client.? 192.168.123.100:0/3492376873' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:59:00.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:00 vm00 bash[28403]: cluster 2026-03-10T14:58:59.522213+0000 mon.a (mon.0) 1429 : cluster [DBG] osdmap e351: 8 total, 8 up, 8 in 2026-03-10T14:59:00.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:00 vm00 bash[28403]: cluster 2026-03-10T14:58:59.522213+0000 mon.a (mon.0) 1429 : cluster [DBG] osdmap e351: 8 total, 8 up, 8 in 2026-03-10T14:59:00.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:00 vm00 bash[28403]: audit 2026-03-10T14:58:59.525847+0000 mon.a (mon.0) 1430 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:59:00.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:00 vm00 bash[28403]: audit 2026-03-10T14:58:59.525847+0000 mon.a (mon.0) 1430 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:59:00.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:00 vm00 bash[28403]: audit 2026-03-10T14:59:00.519092+0000 mon.a (mon.0) 1431 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:59:00.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:00 vm00 bash[28403]: audit 2026-03-10T14:59:00.519092+0000 mon.a (mon.0) 1431 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:59:00.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:00 vm00 bash[28403]: cluster 2026-03-10T14:59:00.522629+0000 mon.a (mon.0) 1432 : cluster [DBG] osdmap e352: 8 total, 8 up, 8 in 2026-03-10T14:59:00.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:00 vm00 bash[28403]: cluster 2026-03-10T14:59:00.522629+0000 mon.a (mon.0) 1432 : cluster [DBG] osdmap e352: 8 total, 8 up, 8 in 2026-03-10T14:59:00.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:00 vm00 bash[20726]: audit 2026-03-10T14:58:58.808304+0000 mgr.y (mgr.24425) 273 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:59:00.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:00 vm00 bash[20726]: audit 2026-03-10T14:58:58.808304+0000 mgr.y (mgr.24425) 273 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:59:00.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:00 vm00 bash[20726]: audit 2026-03-10T14:58:59.515537+0000 mon.a (mon.0) 1428 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix":"osd pool application rm","pool":"test_pool","app":"app1","key":"key1"}]': finished 2026-03-10T14:59:00.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:00 vm00 bash[20726]: audit 2026-03-10T14:58:59.515537+0000 mon.a (mon.0) 1428 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application rm","pool":"test_pool","app":"app1","key":"key1"}]': finished 2026-03-10T14:59:00.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:00 vm00 bash[20726]: audit 2026-03-10T14:58:59.519701+0000 mon.b (mon.1) 70 : audit [INF] from='client.? 192.168.123.100:0/3492376873' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:59:00.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:00 vm00 bash[20726]: audit 2026-03-10T14:58:59.519701+0000 mon.b (mon.1) 70 : audit [INF] from='client.? 192.168.123.100:0/3492376873' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:59:00.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:00 vm00 bash[20726]: cluster 2026-03-10T14:58:59.522213+0000 mon.a (mon.0) 1429 : cluster [DBG] osdmap e351: 8 total, 8 up, 8 in 2026-03-10T14:59:00.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:00 vm00 bash[20726]: cluster 2026-03-10T14:58:59.522213+0000 mon.a (mon.0) 1429 : cluster [DBG] osdmap e351: 8 total, 8 up, 8 in 2026-03-10T14:59:00.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:00 vm00 bash[20726]: audit 2026-03-10T14:58:59.525847+0000 mon.a (mon.0) 1430 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:59:00.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:00 vm00 bash[20726]: audit 2026-03-10T14:58:59.525847+0000 mon.a (mon.0) 1430 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:59:00.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:00 vm00 bash[20726]: audit 2026-03-10T14:59:00.519092+0000 mon.a (mon.0) 1431 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:59:00.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:00 vm00 bash[20726]: audit 2026-03-10T14:59:00.519092+0000 mon.a (mon.0) 1431 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:59:00.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:00 vm00 bash[20726]: cluster 2026-03-10T14:59:00.522629+0000 mon.a (mon.0) 1432 : cluster [DBG] osdmap e352: 8 total, 8 up, 8 in 2026-03-10T14:59:00.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:00 vm00 bash[20726]: cluster 2026-03-10T14:59:00.522629+0000 mon.a (mon.0) 1432 : cluster [DBG] osdmap e352: 8 total, 8 up, 8 in 2026-03-10T14:59:01.532 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_applications PASSED [ 84%] 2026-03-10T14:59:01.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:01 vm03 bash[23394]: cluster 2026-03-10T14:59:00.304782+0000 mgr.y (mgr.24425) 274 : cluster [DBG] pgmap v483: 196 pgs: 196 active+clean; 455 KiB data, 550 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:59:01.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:01 vm03 bash[23394]: cluster 2026-03-10T14:59:00.304782+0000 mgr.y (mgr.24425) 274 : cluster [DBG] pgmap v483: 196 pgs: 196 active+clean; 455 KiB data, 550 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:59:01.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:01 vm03 bash[23394]: cluster 2026-03-10T14:59:01.524873+0000 mon.a (mon.0) 1433 : cluster [DBG] osdmap e353: 8 total, 8 up, 8 in 2026-03-10T14:59:01.875 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:01 vm03 bash[23394]: cluster 2026-03-10T14:59:01.524873+0000 mon.a (mon.0) 1433 : cluster [DBG] osdmap e353: 8 total, 8 up, 8 in 2026-03-10T14:59:01.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:01 vm00 bash[20726]: cluster 2026-03-10T14:59:00.304782+0000 mgr.y (mgr.24425) 274 : cluster [DBG] pgmap v483: 196 pgs: 196 active+clean; 455 KiB data, 550 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:59:01.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:01 vm00 bash[20726]: cluster 2026-03-10T14:59:00.304782+0000 mgr.y (mgr.24425) 274 : cluster [DBG] pgmap v483: 196 pgs: 196 active+clean; 455 KiB data, 550 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:59:01.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:01 vm00 bash[20726]: cluster 2026-03-10T14:59:01.524873+0000 mon.a (mon.0) 1433 : cluster [DBG] osdmap e353: 8 total, 8 up, 8 in 2026-03-10T14:59:01.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:01 vm00 bash[20726]: cluster 2026-03-10T14:59:01.524873+0000 mon.a (mon.0) 1433 : cluster [DBG] osdmap e353: 8 total, 8 up, 8 in 2026-03-10T14:59:01.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:01 vm00 bash[28403]: cluster 2026-03-10T14:59:00.304782+0000 mgr.y (mgr.24425) 274 : cluster [DBG] pgmap v483: 196 pgs: 196 active+clean; 455 KiB data, 550 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:59:01.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:01 vm00 bash[28403]: cluster 2026-03-10T14:59:00.304782+0000 mgr.y (mgr.24425) 274 : cluster [DBG] pgmap v483: 196 pgs: 196 active+clean; 455 KiB data, 550 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:59:01.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:01 vm00 bash[28403]: cluster 2026-03-10T14:59:01.524873+0000 mon.a (mon.0) 1433 : cluster [DBG] osdmap e353: 8 total, 8 up, 8 in 2026-03-10T14:59:01.964 
INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:01 vm00 bash[28403]: cluster 2026-03-10T14:59:01.524873+0000 mon.a (mon.0) 1433 : cluster [DBG] osdmap e353: 8 total, 8 up, 8 in 2026-03-10T14:59:03.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:03 vm03 bash[23394]: cluster 2026-03-10T14:59:02.305063+0000 mgr.y (mgr.24425) 275 : cluster [DBG] pgmap v486: 164 pgs: 164 active+clean; 455 KiB data, 550 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:59:03.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:03 vm03 bash[23394]: cluster 2026-03-10T14:59:02.305063+0000 mgr.y (mgr.24425) 275 : cluster [DBG] pgmap v486: 164 pgs: 164 active+clean; 455 KiB data, 550 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:59:03.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:03 vm03 bash[23394]: cluster 2026-03-10T14:59:02.555850+0000 mon.a (mon.0) 1434 : cluster [DBG] osdmap e354: 8 total, 8 up, 8 in 2026-03-10T14:59:03.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:03 vm03 bash[23394]: cluster 2026-03-10T14:59:02.555850+0000 mon.a (mon.0) 1434 : cluster [DBG] osdmap e354: 8 total, 8 up, 8 in 2026-03-10T14:59:03.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:03 vm03 bash[23394]: audit 2026-03-10T14:59:02.574954+0000 mon.b (mon.1) 71 : audit [INF] from='client.? 192.168.123.100:0/1250917944' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:59:03.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:03 vm03 bash[23394]: audit 2026-03-10T14:59:02.574954+0000 mon.b (mon.1) 71 : audit [INF] from='client.? 192.168.123.100:0/1250917944' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:59:03.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:03 vm03 bash[23394]: audit 2026-03-10T14:59:02.579064+0000 mon.a (mon.0) 1435 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:59:03.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:03 vm03 bash[23394]: audit 2026-03-10T14:59:02.579064+0000 mon.a (mon.0) 1435 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:59:03.964 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:59:03 vm00 bash[21005]: ::ffff:192.168.123.103 - - [10/Mar/2026:14:59:03] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:59:03.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:03 vm00 bash[28403]: cluster 2026-03-10T14:59:02.305063+0000 mgr.y (mgr.24425) 275 : cluster [DBG] pgmap v486: 164 pgs: 164 active+clean; 455 KiB data, 550 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:59:03.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:03 vm00 bash[28403]: cluster 2026-03-10T14:59:02.305063+0000 mgr.y (mgr.24425) 275 : cluster [DBG] pgmap v486: 164 pgs: 164 active+clean; 455 KiB data, 550 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:59:03.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:03 vm00 bash[28403]: cluster 2026-03-10T14:59:02.555850+0000 mon.a (mon.0) 1434 : cluster [DBG] osdmap e354: 8 total, 8 up, 8 in 2026-03-10T14:59:03.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:03 vm00 bash[28403]: cluster 2026-03-10T14:59:02.555850+0000 mon.a (mon.0) 1434 : cluster [DBG] osdmap e354: 8 total, 8 up, 8 in 2026-03-10T14:59:03.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:03 vm00 bash[28403]: audit 2026-03-10T14:59:02.574954+0000 mon.b (mon.1) 71 : audit [INF] from='client.? 192.168.123.100:0/1250917944' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:59:03.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:03 vm00 bash[28403]: audit 2026-03-10T14:59:02.574954+0000 mon.b (mon.1) 71 : audit [INF] from='client.? 
192.168.123.100:0/1250917944' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:59:03.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:03 vm00 bash[28403]: audit 2026-03-10T14:59:02.579064+0000 mon.a (mon.0) 1435 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:59:03.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:03 vm00 bash[28403]: audit 2026-03-10T14:59:02.579064+0000 mon.a (mon.0) 1435 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:59:03.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:03 vm00 bash[20726]: cluster 2026-03-10T14:59:02.305063+0000 mgr.y (mgr.24425) 275 : cluster [DBG] pgmap v486: 164 pgs: 164 active+clean; 455 KiB data, 550 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:59:03.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:03 vm00 bash[20726]: cluster 2026-03-10T14:59:02.305063+0000 mgr.y (mgr.24425) 275 : cluster [DBG] pgmap v486: 164 pgs: 164 active+clean; 455 KiB data, 550 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:59:03.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:03 vm00 bash[20726]: cluster 2026-03-10T14:59:02.555850+0000 mon.a (mon.0) 1434 : cluster [DBG] osdmap e354: 8 total, 8 up, 8 in 2026-03-10T14:59:03.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:03 vm00 bash[20726]: cluster 2026-03-10T14:59:02.555850+0000 mon.a (mon.0) 1434 : cluster [DBG] osdmap e354: 8 total, 8 up, 8 in 2026-03-10T14:59:03.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:03 vm00 bash[20726]: audit 2026-03-10T14:59:02.574954+0000 mon.b (mon.1) 71 : audit [INF] from='client.? 
192.168.123.100:0/1250917944' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:59:03.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:03 vm00 bash[20726]: audit 2026-03-10T14:59:02.574954+0000 mon.b (mon.1) 71 : audit [INF] from='client.? 192.168.123.100:0/1250917944' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:59:03.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:03 vm00 bash[20726]: audit 2026-03-10T14:59:02.579064+0000 mon.a (mon.0) 1435 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:59:03.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:03 vm00 bash[20726]: audit 2026-03-10T14:59:02.579064+0000 mon.a (mon.0) 1435 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:59:04.591 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_service_daemon PASSED [ 85%] 2026-03-10T14:59:04.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:04 vm03 bash[23394]: audit 2026-03-10T14:59:03.555419+0000 mon.a (mon.0) 1436 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:59:04.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:04 vm03 bash[23394]: audit 2026-03-10T14:59:03.555419+0000 mon.a (mon.0) 1436 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:59:04.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:04 vm03 bash[23394]: cluster 2026-03-10T14:59:03.558501+0000 mon.a (mon.0) 1437 : cluster [DBG] osdmap e355: 8 total, 8 up, 8 in 2026-03-10T14:59:04.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:04 vm03 bash[23394]: cluster 2026-03-10T14:59:03.558501+0000 mon.a (mon.0) 1437 : cluster [DBG] osdmap e355: 8 total, 8 up, 8 in 2026-03-10T14:59:04.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:04 vm00 bash[20726]: audit 2026-03-10T14:59:03.555419+0000 mon.a (mon.0) 1436 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:59:04.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:04 vm00 bash[20726]: audit 2026-03-10T14:59:03.555419+0000 mon.a (mon.0) 1436 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:59:04.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:04 vm00 bash[20726]: cluster 2026-03-10T14:59:03.558501+0000 mon.a (mon.0) 1437 : cluster [DBG] osdmap e355: 8 total, 8 up, 8 in 2026-03-10T14:59:04.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:04 vm00 bash[20726]: cluster 2026-03-10T14:59:03.558501+0000 mon.a (mon.0) 1437 : cluster [DBG] osdmap e355: 8 total, 8 up, 8 in 2026-03-10T14:59:04.967 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:04 vm00 bash[28403]: audit 2026-03-10T14:59:03.555419+0000 mon.a (mon.0) 1436 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:59:04.967 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:04 vm00 bash[28403]: audit 2026-03-10T14:59:03.555419+0000 mon.a (mon.0) 1436 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:59:04.967 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:04 vm00 bash[28403]: cluster 2026-03-10T14:59:03.558501+0000 mon.a (mon.0) 1437 : cluster [DBG] osdmap e355: 8 total, 8 up, 8 in 2026-03-10T14:59:04.967 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:04 vm00 bash[28403]: cluster 2026-03-10T14:59:03.558501+0000 mon.a (mon.0) 1437 : cluster [DBG] osdmap e355: 8 total, 8 up, 8 in 2026-03-10T14:59:05.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:05 vm03 bash[23394]: cluster 2026-03-10T14:59:04.305426+0000 mgr.y (mgr.24425) 276 : cluster [DBG] pgmap v489: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 550 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:59:05.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:05 vm03 bash[23394]: cluster 2026-03-10T14:59:04.305426+0000 mgr.y (mgr.24425) 276 : cluster [DBG] pgmap v489: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 550 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:59:05.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:05 vm03 bash[23394]: cluster 2026-03-10T14:59:04.581426+0000 mon.a (mon.0) 1438 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:59:05.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:05 vm03 bash[23394]: cluster 2026-03-10T14:59:04.581426+0000 mon.a (mon.0) 1438 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:59:05.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:05 vm03 bash[23394]: cluster 2026-03-10T14:59:04.588594+0000 mon.a (mon.0) 1439 : cluster [DBG] osdmap e356: 8 total, 8 up, 8 in 2026-03-10T14:59:05.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:05 vm03 bash[23394]: cluster 2026-03-10T14:59:04.588594+0000 mon.a (mon.0) 1439 : cluster [DBG] osdmap e356: 8 total, 8 up, 8 in 
2026-03-10T14:59:05.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:05 vm00 bash[20726]: cluster 2026-03-10T14:59:04.305426+0000 mgr.y (mgr.24425) 276 : cluster [DBG] pgmap v489: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 550 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:59:05.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:05 vm00 bash[20726]: cluster 2026-03-10T14:59:04.305426+0000 mgr.y (mgr.24425) 276 : cluster [DBG] pgmap v489: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 550 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:59:05.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:05 vm00 bash[20726]: cluster 2026-03-10T14:59:04.581426+0000 mon.a (mon.0) 1438 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:59:05.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:05 vm00 bash[20726]: cluster 2026-03-10T14:59:04.581426+0000 mon.a (mon.0) 1438 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:59:05.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:05 vm00 bash[20726]: cluster 2026-03-10T14:59:04.588594+0000 mon.a (mon.0) 1439 : cluster [DBG] osdmap e356: 8 total, 8 up, 8 in 2026-03-10T14:59:05.965 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:05 vm00 bash[20726]: cluster 2026-03-10T14:59:04.588594+0000 mon.a (mon.0) 1439 : cluster [DBG] osdmap e356: 8 total, 8 up, 8 in 2026-03-10T14:59:05.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:05 vm00 bash[28403]: cluster 2026-03-10T14:59:04.305426+0000 mgr.y (mgr.24425) 276 : cluster [DBG] pgmap v489: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 550 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:59:05.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:05 vm00 bash[28403]: cluster 2026-03-10T14:59:04.305426+0000 mgr.y (mgr.24425) 276 : cluster [DBG] pgmap v489: 196 pgs: 32 unknown, 164 active+clean; 455 KiB 
data, 550 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:59:05.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:05 vm00 bash[28403]: cluster 2026-03-10T14:59:04.581426+0000 mon.a (mon.0) 1438 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:59:05.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:05 vm00 bash[28403]: cluster 2026-03-10T14:59:04.581426+0000 mon.a (mon.0) 1438 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:59:05.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:05 vm00 bash[28403]: cluster 2026-03-10T14:59:04.588594+0000 mon.a (mon.0) 1439 : cluster [DBG] osdmap e356: 8 total, 8 up, 8 in 2026-03-10T14:59:05.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:05 vm00 bash[28403]: cluster 2026-03-10T14:59:04.588594+0000 mon.a (mon.0) 1439 : cluster [DBG] osdmap e356: 8 total, 8 up, 8 in 2026-03-10T14:59:06.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:06 vm00 bash[20726]: audit 2026-03-10T14:59:05.626316+0000 mon.b (mon.1) 72 : audit [INF] from='client.? 192.168.123.100:0/883792124' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:59:06.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:06 vm00 bash[20726]: audit 2026-03-10T14:59:05.626316+0000 mon.b (mon.1) 72 : audit [INF] from='client.? 
192.168.123.100:0/883792124' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:59:06.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:06 vm00 bash[20726]: cluster 2026-03-10T14:59:05.626948+0000 mon.a (mon.0) 1440 : cluster [DBG] osdmap e357: 8 total, 8 up, 8 in 2026-03-10T14:59:06.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:06 vm00 bash[20726]: cluster 2026-03-10T14:59:05.626948+0000 mon.a (mon.0) 1440 : cluster [DBG] osdmap e357: 8 total, 8 up, 8 in 2026-03-10T14:59:06.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:06 vm00 bash[20726]: audit 2026-03-10T14:59:05.635386+0000 mon.a (mon.0) 1441 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:59:06.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:06 vm00 bash[20726]: audit 2026-03-10T14:59:05.635386+0000 mon.a (mon.0) 1441 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:59:06.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:06 vm00 bash[28403]: audit 2026-03-10T14:59:05.626316+0000 mon.b (mon.1) 72 : audit [INF] from='client.? 192.168.123.100:0/883792124' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:59:06.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:06 vm00 bash[28403]: audit 2026-03-10T14:59:05.626316+0000 mon.b (mon.1) 72 : audit [INF] from='client.? 
192.168.123.100:0/883792124' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T14:59:06.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:06 vm00 bash[28403]: cluster 2026-03-10T14:59:05.626948+0000 mon.a (mon.0) 1440 : cluster [DBG] osdmap e357: 8 total, 8 up, 8 in
2026-03-10T14:59:06.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:06 vm00 bash[28403]: audit 2026-03-10T14:59:05.635386+0000 mon.a (mon.0) 1441 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T14:59:07.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:06 vm03 bash[23394]: audit 2026-03-10T14:59:05.626316+0000 mon.b (mon.1) 72 : audit [INF] from='client.? 192.168.123.100:0/883792124' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T14:59:07.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:06 vm03 bash[23394]: cluster 2026-03-10T14:59:05.626948+0000 mon.a (mon.0) 1440 : cluster [DBG] osdmap e357: 8 total, 8 up, 8 in
2026-03-10T14:59:07.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:06 vm03 bash[23394]: audit 2026-03-10T14:59:05.635386+0000 mon.a (mon.0) 1441 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T14:59:07.690 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_alignment PASSED [ 86%]
2026-03-10T14:59:07.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:07 vm00 bash[20726]: cluster 2026-03-10T14:59:06.305726+0000 mgr.y (mgr.24425) 277 : cluster [DBG] pgmap v492: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 550 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:59:07.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:07 vm00 bash[20726]: audit 2026-03-10T14:59:06.621088+0000 mon.a (mon.0) 1442 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T14:59:07.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:07 vm00 bash[20726]: cluster 2026-03-10T14:59:06.625070+0000 mon.a (mon.0) 1443 : cluster [DBG] osdmap e358: 8 total, 8 up, 8 in
2026-03-10T14:59:07.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:07 vm00 bash[28403]: cluster 2026-03-10T14:59:06.305726+0000 mgr.y (mgr.24425) 277 : cluster [DBG] pgmap v492: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 550 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:59:07.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:07 vm00 bash[28403]: audit 2026-03-10T14:59:06.621088+0000 mon.a (mon.0) 1442 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T14:59:07.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:07 vm00 bash[28403]: cluster 2026-03-10T14:59:06.625070+0000 mon.a (mon.0) 1443 : cluster [DBG] osdmap e358: 8 total, 8 up, 8 in
2026-03-10T14:59:08.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:07 vm03 bash[23394]: cluster 2026-03-10T14:59:06.305726+0000 mgr.y (mgr.24425) 277 : cluster [DBG] pgmap v492: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 550 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:59:08.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:07 vm03 bash[23394]: audit 2026-03-10T14:59:06.621088+0000 mon.a (mon.0) 1442 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T14:59:08.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:07 vm03 bash[23394]: cluster 2026-03-10T14:59:06.625070+0000 mon.a (mon.0) 1443 : cluster [DBG] osdmap e358: 8 total, 8 up, 8 in
2026-03-10T14:59:08.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:08 vm00 bash[20726]: cluster 2026-03-10T14:59:07.686842+0000 mon.a (mon.0) 1444 : cluster [DBG] osdmap e359: 8 total, 8 up, 8 in
2026-03-10T14:59:08.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:08 vm00 bash[20726]: audit 2026-03-10T14:59:07.705088+0000 mon.a (mon.0) 1445 : audit [INF] from='client.? 192.168.123.100:0/2054115216' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-ec", "profile": ["k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T14:59:08.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:08 vm00 bash[20726]: audit 2026-03-10T14:59:08.612078+0000 mon.a (mon.0) 1446 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:59:08.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:08 vm00 bash[28403]: cluster 2026-03-10T14:59:07.686842+0000 mon.a (mon.0) 1444 : cluster [DBG] osdmap e359: 8 total, 8 up, 8 in
2026-03-10T14:59:08.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:08 vm00 bash[28403]: audit 2026-03-10T14:59:07.705088+0000 mon.a (mon.0) 1445 : audit [INF] from='client.? 192.168.123.100:0/2054115216' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-ec", "profile": ["k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T14:59:08.965 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:08 vm00 bash[28403]: audit 2026-03-10T14:59:08.612078+0000 mon.a (mon.0) 1446 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:59:09.125 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 14:59:08 vm03 bash[48459]: debug there is no tcmu-runner data available
2026-03-10T14:59:09.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:08 vm03 bash[23394]: cluster 2026-03-10T14:59:07.686842+0000 mon.a (mon.0) 1444 : cluster [DBG] osdmap e359: 8 total, 8 up, 8 in
2026-03-10T14:59:09.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:08 vm03 bash[23394]: audit 2026-03-10T14:59:07.705088+0000 mon.a (mon.0) 1445 : audit [INF] from='client.? 192.168.123.100:0/2054115216' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-ec", "profile": ["k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T14:59:09.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:08 vm03 bash[23394]: audit 2026-03-10T14:59:08.612078+0000 mon.a (mon.0) 1446 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:59:10.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:09 vm03 bash[23394]: cluster 2026-03-10T14:59:08.305990+0000 mgr.y (mgr.24425) 278 : cluster [DBG] pgmap v495: 164 pgs: 164 active+clean; 455 KiB data, 550 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:59:10.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:09 vm03 bash[23394]: audit 2026-03-10T14:59:08.715209+0000 mon.a (mon.0) 1447 : audit [INF] from='client.? 192.168.123.100:0/2054115216' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-ec", "profile": ["k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T14:59:10.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:09 vm03 bash[23394]: cluster 2026-03-10T14:59:08.722233+0000 mon.a (mon.0) 1448 : cluster [DBG] osdmap e360: 8 total, 8 up, 8 in
2026-03-10T14:59:10.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:09 vm03 bash[23394]: audit 2026-03-10T14:59:08.722846+0000 mon.a (mon.0) 1449 : audit [INF] from='client.? 192.168.123.100:0/2054115216' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 8, "pgp_num": 8, "pool": "test-ec", "pool_type": "erasure", "erasure_code_profile": "testprofile-test-ec"}]: dispatch
2026-03-10T14:59:10.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:09 vm03 bash[23394]: audit 2026-03-10T14:59:08.969447+0000 mon.a (mon.0) 1450 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:59:10.126 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:09 vm03 bash[23394]: audit 2026-03-10T14:59:08.970050+0000 mon.a (mon.0) 1451 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T14:59:10.126 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:09 vm03 bash[23394]: audit 2026-03-10T14:59:08.977037+0000 mon.a (mon.0) 1452 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:59:10.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:09 vm00 bash[28403]: cluster 2026-03-10T14:59:08.305990+0000 mgr.y (mgr.24425) 278 : cluster [DBG] pgmap v495: 164 pgs: 164 active+clean; 455 KiB data, 550 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:59:10.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:09 vm00 bash[28403]: audit 2026-03-10T14:59:08.715209+0000 mon.a (mon.0) 1447 : audit [INF] from='client.? 192.168.123.100:0/2054115216' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-ec", "profile": ["k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T14:59:10.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:09 vm00 bash[28403]: cluster 2026-03-10T14:59:08.722233+0000 mon.a (mon.0) 1448 : cluster [DBG] osdmap e360: 8 total, 8 up, 8 in
2026-03-10T14:59:10.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:09 vm00 bash[28403]: audit 2026-03-10T14:59:08.722846+0000 mon.a (mon.0) 1449 : audit [INF] from='client.? 192.168.123.100:0/2054115216' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 8, "pgp_num": 8, "pool": "test-ec", "pool_type": "erasure", "erasure_code_profile": "testprofile-test-ec"}]: dispatch
2026-03-10T14:59:10.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:09 vm00 bash[28403]: audit 2026-03-10T14:59:08.969447+0000 mon.a (mon.0) 1450 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:59:10.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:09 vm00 bash[28403]: audit 2026-03-10T14:59:08.970050+0000 mon.a (mon.0) 1451 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T14:59:10.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:09 vm00 bash[28403]: audit 2026-03-10T14:59:08.977037+0000 mon.a (mon.0) 1452 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:59:10.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:09 vm00 bash[20726]: cluster 2026-03-10T14:59:08.305990+0000 mgr.y (mgr.24425) 278 : cluster [DBG] pgmap v495: 164 pgs: 164 active+clean; 455 KiB data, 550 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:59:10.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:09 vm00 bash[20726]: audit 2026-03-10T14:59:08.715209+0000 mon.a (mon.0) 1447 : audit [INF] from='client.? 192.168.123.100:0/2054115216' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-ec", "profile": ["k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T14:59:10.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:09 vm00 bash[20726]: cluster 2026-03-10T14:59:08.722233+0000 mon.a (mon.0) 1448 : cluster [DBG] osdmap e360: 8 total, 8 up, 8 in
2026-03-10T14:59:10.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:09 vm00 bash[20726]: audit 2026-03-10T14:59:08.722846+0000 mon.a (mon.0) 1449 : audit [INF] from='client.? 192.168.123.100:0/2054115216' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 8, "pgp_num": 8, "pool": "test-ec", "pool_type": "erasure", "erasure_code_profile": "testprofile-test-ec"}]: dispatch
2026-03-10T14:59:10.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:09 vm00 bash[20726]: audit 2026-03-10T14:59:08.969447+0000 mon.a (mon.0) 1450 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:59:10.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:09 vm00 bash[20726]: audit 2026-03-10T14:59:08.970050+0000 mon.a (mon.0) 1451 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T14:59:10.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:09 vm00 bash[20726]: audit 2026-03-10T14:59:08.977037+0000 mon.a (mon.0) 1452 : audit [INF] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y'
2026-03-10T14:59:11.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:10 vm03 bash[23394]: audit 2026-03-10T14:59:08.816321+0000 mgr.y (mgr.24425) 279 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:59:11.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:10 vm03 bash[23394]: cluster 2026-03-10T14:59:09.731026+0000 mon.a (mon.0) 1453 : cluster [DBG] osdmap e361: 8 total, 8 up, 8 in
2026-03-10T14:59:11.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:10 vm03 bash[23394]: audit 2026-03-10T14:59:10.101277+0000 mon.a (mon.0) 1454 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:59:11.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:10 vm00 bash[28403]: audit 2026-03-10T14:59:08.816321+0000 mgr.y (mgr.24425) 279 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:59:11.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:10 vm00 bash[28403]: cluster 2026-03-10T14:59:09.731026+0000 mon.a (mon.0) 1453 : cluster [DBG] osdmap e361: 8 total, 8 up, 8 in
2026-03-10T14:59:11.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:10 vm00 bash[28403]: audit 2026-03-10T14:59:10.101277+0000 mon.a (mon.0) 1454 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:59:11.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:10 vm00 bash[20726]: audit 2026-03-10T14:59:08.816321+0000 mgr.y (mgr.24425) 279 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:59:11.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:10 vm00 bash[20726]: cluster 2026-03-10T14:59:09.731026+0000 mon.a (mon.0) 1453 : cluster [DBG] osdmap e361: 8 total, 8 up, 8 in
2026-03-10T14:59:11.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:10 vm00 bash[20726]: audit 2026-03-10T14:59:10.101277+0000 mon.a (mon.0) 1454 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:59:12.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:11 vm03 bash[23394]: cluster 2026-03-10T14:59:10.306534+0000 mgr.y (mgr.24425) 280 : cluster [DBG] pgmap v498: 164 pgs: 164 active+clean; 455 KiB data, 551 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:59:12.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:11 vm03 bash[23394]: audit 2026-03-10T14:59:10.792007+0000 mon.a (mon.0) 1455 : audit [INF] from='client.? 192.168.123.100:0/2054115216' entity='client.admin' cmd='[{"prefix": "osd pool create", "pg_num": 8, "pgp_num": 8, "pool": "test-ec", "pool_type": "erasure", "erasure_code_profile": "testprofile-test-ec"}]': finished
2026-03-10T14:59:12.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:11 vm03 bash[23394]: cluster 2026-03-10T14:59:10.797832+0000 mon.a (mon.0) 1456 : cluster [DBG] osdmap e362: 8 total, 8 up, 8 in
2026-03-10T14:59:12.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:11 vm03 bash[23394]: audit 2026-03-10T14:59:10.816198+0000 mon.a (mon.0) 1457 : audit [INF] from='client.? 192.168.123.100:0/2054115216' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T14:59:12.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:11 vm03 bash[23394]: audit 2026-03-10T14:59:11.795322+0000 mon.a (mon.0) 1458 : audit [INF] from='client.? 192.168.123.100:0/2054115216' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T14:59:12.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:11 vm03 bash[23394]: cluster 2026-03-10T14:59:11.809229+0000 mon.a (mon.0) 1459 : cluster [DBG] osdmap e363: 8 total, 8 up, 8 in
2026-03-10T14:59:12.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:11 vm00 bash[28403]: cluster 2026-03-10T14:59:10.306534+0000 mgr.y (mgr.24425) 280 : cluster [DBG] pgmap v498: 164 pgs: 164 active+clean; 455 KiB data, 551 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:59:12.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:11 vm00 bash[28403]: audit 2026-03-10T14:59:10.792007+0000 mon.a (mon.0) 1455 : audit [INF] from='client.? 192.168.123.100:0/2054115216' entity='client.admin' cmd='[{"prefix": "osd pool create", "pg_num": 8, "pgp_num": 8, "pool": "test-ec", "pool_type": "erasure", "erasure_code_profile": "testprofile-test-ec"}]': finished
2026-03-10T14:59:12.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:11 vm00 bash[28403]: cluster 2026-03-10T14:59:10.797832+0000 mon.a (mon.0) 1456 : cluster [DBG] osdmap e362: 8 total, 8 up, 8 in
2026-03-10T14:59:12.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:11 vm00 bash[28403]: audit 2026-03-10T14:59:10.816198+0000 mon.a (mon.0) 1457 : audit [INF] from='client.? 192.168.123.100:0/2054115216' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T14:59:12.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:11 vm00 bash[28403]: audit 2026-03-10T14:59:11.795322+0000 mon.a (mon.0) 1458 : audit [INF] from='client.? 192.168.123.100:0/2054115216' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T14:59:12.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:11 vm00 bash[28403]: cluster 2026-03-10T14:59:11.809229+0000 mon.a (mon.0) 1459 : cluster [DBG] osdmap e363: 8 total, 8 up, 8 in
2026-03-10T14:59:12.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:11 vm00 bash[20726]: cluster 2026-03-10T14:59:10.306534+0000 mgr.y (mgr.24425) 280 : cluster [DBG] pgmap v498: 164 pgs: 164 active+clean; 455 KiB data, 551 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:59:12.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:11 vm00 bash[20726]: audit 2026-03-10T14:59:10.792007+0000 mon.a (mon.0) 1455 : audit [INF] from='client.? 192.168.123.100:0/2054115216' entity='client.admin' cmd='[{"prefix": "osd pool create", "pg_num": 8, "pgp_num": 8, "pool": "test-ec", "pool_type": "erasure", "erasure_code_profile": "testprofile-test-ec"}]': finished
2026-03-10T14:59:12.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:11 vm00 bash[20726]: cluster 2026-03-10T14:59:10.797832+0000 mon.a (mon.0) 1456 : cluster [DBG] osdmap e362: 8 total, 8 up, 8 in
2026-03-10T14:59:12.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:11 vm00 bash[20726]: audit 2026-03-10T14:59:10.816198+0000 mon.a (mon.0) 1457 : audit [INF] from='client.? 192.168.123.100:0/2054115216' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-10T14:59:12.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:11 vm00 bash[20726]: audit 2026-03-10T14:59:11.795322+0000 mon.a (mon.0) 1458 : audit [INF] from='client.? 192.168.123.100:0/2054115216' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T14:59:12.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:11 vm00 bash[20726]: audit 2026-03-10T14:59:11.795322+0000 mon.a (mon.0) 1458 : audit [INF] from='client.?
192.168.123.100:0/2054115216' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:59:12.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:11 vm00 bash[20726]: cluster 2026-03-10T14:59:11.809229+0000 mon.a (mon.0) 1459 : cluster [DBG] osdmap e363: 8 total, 8 up, 8 in 2026-03-10T14:59:12.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:11 vm00 bash[20726]: cluster 2026-03-10T14:59:11.809229+0000 mon.a (mon.0) 1459 : cluster [DBG] osdmap e363: 8 total, 8 up, 8 in 2026-03-10T14:59:12.808 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctxEc::test_alignment PASSED [ 87%] 2026-03-10T14:59:14.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:13 vm03 bash[23394]: cluster 2026-03-10T14:59:12.306875+0000 mgr.y (mgr.24425) 281 : cluster [DBG] pgmap v501: 172 pgs: 8 unknown, 164 active+clean; 455 KiB data, 551 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:59:14.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:13 vm03 bash[23394]: cluster 2026-03-10T14:59:12.306875+0000 mgr.y (mgr.24425) 281 : cluster [DBG] pgmap v501: 172 pgs: 8 unknown, 164 active+clean; 455 KiB data, 551 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:59:14.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:13 vm03 bash[23394]: cluster 2026-03-10T14:59:12.809835+0000 mon.a (mon.0) 1460 : cluster [DBG] osdmap e364: 8 total, 8 up, 8 in 2026-03-10T14:59:14.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:13 vm03 bash[23394]: cluster 2026-03-10T14:59:12.809835+0000 mon.a (mon.0) 1460 : cluster [DBG] osdmap e364: 8 total, 8 up, 8 in 2026-03-10T14:59:14.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:13 vm03 bash[23394]: cluster 2026-03-10T14:59:12.822661+0000 mon.a (mon.0) 1461 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:59:14.125 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:13 vm03 bash[23394]: cluster 2026-03-10T14:59:12.822661+0000 mon.a (mon.0) 1461 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:59:14.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:13 vm00 bash[28403]: cluster 2026-03-10T14:59:12.306875+0000 mgr.y (mgr.24425) 281 : cluster [DBG] pgmap v501: 172 pgs: 8 unknown, 164 active+clean; 455 KiB data, 551 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:59:14.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:13 vm00 bash[28403]: cluster 2026-03-10T14:59:12.306875+0000 mgr.y (mgr.24425) 281 : cluster [DBG] pgmap v501: 172 pgs: 8 unknown, 164 active+clean; 455 KiB data, 551 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:59:14.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:13 vm00 bash[28403]: cluster 2026-03-10T14:59:12.809835+0000 mon.a (mon.0) 1460 : cluster [DBG] osdmap e364: 8 total, 8 up, 8 in 2026-03-10T14:59:14.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:13 vm00 bash[28403]: cluster 2026-03-10T14:59:12.809835+0000 mon.a (mon.0) 1460 : cluster [DBG] osdmap e364: 8 total, 8 up, 8 in 2026-03-10T14:59:14.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:13 vm00 bash[28403]: cluster 2026-03-10T14:59:12.822661+0000 mon.a (mon.0) 1461 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:59:14.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:13 vm00 bash[28403]: cluster 2026-03-10T14:59:12.822661+0000 mon.a (mon.0) 1461 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:59:14.214 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:59:13 vm00 bash[21005]: ::ffff:192.168.123.103 - - [10/Mar/2026:14:59:13] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:59:14.215 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:13 vm00 bash[20726]: cluster 2026-03-10T14:59:12.306875+0000 mgr.y (mgr.24425) 281 : cluster [DBG] pgmap v501: 172 pgs: 8 unknown, 164 active+clean; 455 KiB data, 551 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:59:14.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:13 vm00 bash[20726]: cluster 2026-03-10T14:59:12.306875+0000 mgr.y (mgr.24425) 281 : cluster [DBG] pgmap v501: 172 pgs: 8 unknown, 164 active+clean; 455 KiB data, 551 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:59:14.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:13 vm00 bash[20726]: cluster 2026-03-10T14:59:12.809835+0000 mon.a (mon.0) 1460 : cluster [DBG] osdmap e364: 8 total, 8 up, 8 in 2026-03-10T14:59:14.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:13 vm00 bash[20726]: cluster 2026-03-10T14:59:12.809835+0000 mon.a (mon.0) 1460 : cluster [DBG] osdmap e364: 8 total, 8 up, 8 in 2026-03-10T14:59:14.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:13 vm00 bash[20726]: cluster 2026-03-10T14:59:12.822661+0000 mon.a (mon.0) 1461 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:59:14.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:13 vm00 bash[20726]: cluster 2026-03-10T14:59:12.822661+0000 mon.a (mon.0) 1461 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:59:15.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:14 vm03 bash[23394]: cluster 2026-03-10T14:59:13.839026+0000 mon.a (mon.0) 1462 : cluster [DBG] osdmap e365: 8 total, 8 up, 8 in 2026-03-10T14:59:15.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:14 vm03 bash[23394]: cluster 2026-03-10T14:59:13.839026+0000 mon.a (mon.0) 1462 : cluster [DBG] osdmap e365: 8 total, 8 up, 8 in 2026-03-10T14:59:15.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:14 vm03 
bash[23394]: audit 2026-03-10T14:59:13.848098+0000 mon.c (mon.2) 53 : audit [INF] from='client.? 192.168.123.100:0/636317420' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:59:15.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:14 vm03 bash[23394]: audit 2026-03-10T14:59:13.848098+0000 mon.c (mon.2) 53 : audit [INF] from='client.? 192.168.123.100:0/636317420' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:59:15.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:14 vm03 bash[23394]: audit 2026-03-10T14:59:13.851241+0000 mon.a (mon.0) 1463 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:59:15.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:14 vm03 bash[23394]: audit 2026-03-10T14:59:13.851241+0000 mon.a (mon.0) 1463 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:59:15.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:14 vm03 bash[23394]: cluster 2026-03-10T14:59:14.307274+0000 mgr.y (mgr.24425) 282 : cluster [DBG] pgmap v504: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 551 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:59:15.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:14 vm03 bash[23394]: cluster 2026-03-10T14:59:14.307274+0000 mgr.y (mgr.24425) 282 : cluster [DBG] pgmap v504: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 551 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:59:15.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:14 vm00 bash[28403]: cluster 2026-03-10T14:59:13.839026+0000 mon.a (mon.0) 1462 : cluster [DBG] osdmap e365: 8 total, 8 up, 8 in 2026-03-10T14:59:15.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:14 vm00 bash[28403]: cluster 2026-03-10T14:59:13.839026+0000 mon.a (mon.0) 1462 : cluster [DBG] osdmap e365: 8 total, 8 up, 8 in 
2026-03-10T14:59:15.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:14 vm00 bash[28403]: audit 2026-03-10T14:59:13.848098+0000 mon.c (mon.2) 53 : audit [INF] from='client.? 192.168.123.100:0/636317420' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:59:15.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:14 vm00 bash[28403]: audit 2026-03-10T14:59:13.848098+0000 mon.c (mon.2) 53 : audit [INF] from='client.? 192.168.123.100:0/636317420' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:59:15.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:14 vm00 bash[28403]: audit 2026-03-10T14:59:13.851241+0000 mon.a (mon.0) 1463 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:59:15.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:14 vm00 bash[28403]: audit 2026-03-10T14:59:13.851241+0000 mon.a (mon.0) 1463 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:59:15.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:14 vm00 bash[28403]: cluster 2026-03-10T14:59:14.307274+0000 mgr.y (mgr.24425) 282 : cluster [DBG] pgmap v504: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 551 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:59:15.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:14 vm00 bash[28403]: cluster 2026-03-10T14:59:14.307274+0000 mgr.y (mgr.24425) 282 : cluster [DBG] pgmap v504: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 551 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:59:15.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:14 vm00 bash[20726]: cluster 2026-03-10T14:59:13.839026+0000 mon.a (mon.0) 1462 : cluster [DBG] osdmap e365: 8 total, 8 up, 8 in 2026-03-10T14:59:15.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:14 vm00 bash[20726]: cluster 2026-03-10T14:59:13.839026+0000 mon.a (mon.0) 1462 : cluster [DBG] osdmap e365: 8 total, 8 up, 8 in 2026-03-10T14:59:15.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:14 vm00 bash[20726]: audit 2026-03-10T14:59:13.848098+0000 mon.c (mon.2) 53 : audit [INF] from='client.? 192.168.123.100:0/636317420' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:59:15.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:14 vm00 bash[20726]: audit 2026-03-10T14:59:13.848098+0000 mon.c (mon.2) 53 : audit [INF] from='client.? 192.168.123.100:0/636317420' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:59:15.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:14 vm00 bash[20726]: audit 2026-03-10T14:59:13.851241+0000 mon.a (mon.0) 1463 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:59:15.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:14 vm00 bash[20726]: audit 2026-03-10T14:59:13.851241+0000 mon.a (mon.0) 1463 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:59:15.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:14 vm00 bash[20726]: cluster 2026-03-10T14:59:14.307274+0000 mgr.y (mgr.24425) 282 : cluster [DBG] pgmap v504: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 551 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:59:15.215 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:14 vm00 bash[20726]: cluster 2026-03-10T14:59:14.307274+0000 mgr.y (mgr.24425) 282 : cluster [DBG] pgmap v504: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 551 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:59:15.841 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx2::test_get_last_version PASSED [ 89%] 2026-03-10T14:59:16.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:15 vm03 bash[23394]: audit 2026-03-10T14:59:14.834128+0000 mon.a (mon.0) 1464 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:59:16.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:15 vm03 bash[23394]: audit 2026-03-10T14:59:14.834128+0000 mon.a (mon.0) 1464 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:59:16.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:15 vm03 bash[23394]: cluster 2026-03-10T14:59:14.847585+0000 mon.a (mon.0) 1465 : cluster [DBG] osdmap e366: 8 total, 8 up, 8 in 2026-03-10T14:59:16.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:15 vm03 bash[23394]: cluster 2026-03-10T14:59:14.847585+0000 mon.a (mon.0) 1465 : cluster [DBG] osdmap e366: 8 total, 8 up, 8 in 2026-03-10T14:59:16.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:15 vm03 bash[23394]: cluster 2026-03-10T14:59:15.843264+0000 mon.a (mon.0) 1466 : cluster [DBG] osdmap e367: 8 total, 8 up, 8 in 2026-03-10T14:59:16.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:15 vm03 bash[23394]: cluster 2026-03-10T14:59:15.843264+0000 mon.a (mon.0) 1466 : cluster [DBG] osdmap e367: 8 total, 8 up, 8 in 2026-03-10T14:59:16.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:15 vm00 bash[28403]: audit 2026-03-10T14:59:14.834128+0000 mon.a (mon.0) 1464 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:59:16.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:15 vm00 bash[28403]: audit 2026-03-10T14:59:14.834128+0000 mon.a (mon.0) 1464 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:59:16.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:15 vm00 bash[28403]: cluster 2026-03-10T14:59:14.847585+0000 mon.a (mon.0) 1465 : cluster [DBG] osdmap e366: 8 total, 8 up, 8 in 2026-03-10T14:59:16.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:15 vm00 bash[28403]: cluster 2026-03-10T14:59:14.847585+0000 mon.a (mon.0) 1465 : cluster [DBG] osdmap e366: 8 total, 8 up, 8 in 2026-03-10T14:59:16.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:15 vm00 bash[28403]: cluster 2026-03-10T14:59:15.843264+0000 mon.a (mon.0) 1466 : cluster [DBG] osdmap e367: 8 total, 8 up, 8 in 2026-03-10T14:59:16.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:15 vm00 bash[28403]: cluster 2026-03-10T14:59:15.843264+0000 mon.a (mon.0) 1466 : cluster [DBG] osdmap e367: 8 total, 8 up, 8 in 2026-03-10T14:59:16.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:15 vm00 bash[20726]: audit 2026-03-10T14:59:14.834128+0000 mon.a (mon.0) 1464 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:59:16.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:15 vm00 bash[20726]: audit 2026-03-10T14:59:14.834128+0000 mon.a (mon.0) 1464 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:59:16.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:15 vm00 bash[20726]: cluster 2026-03-10T14:59:14.847585+0000 mon.a (mon.0) 1465 : cluster [DBG] osdmap e366: 8 total, 8 up, 8 in 2026-03-10T14:59:16.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:15 vm00 bash[20726]: cluster 2026-03-10T14:59:14.847585+0000 mon.a (mon.0) 1465 : cluster [DBG] osdmap e366: 8 total, 8 up, 8 in 2026-03-10T14:59:16.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:15 vm00 bash[20726]: cluster 2026-03-10T14:59:15.843264+0000 mon.a (mon.0) 1466 : cluster [DBG] osdmap e367: 8 total, 8 up, 8 in 2026-03-10T14:59:16.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:15 vm00 bash[20726]: cluster 2026-03-10T14:59:15.843264+0000 mon.a (mon.0) 1466 : cluster [DBG] osdmap e367: 8 total, 8 up, 8 in 2026-03-10T14:59:17.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:17 vm03 bash[23394]: cluster 2026-03-10T14:59:16.307613+0000 mgr.y (mgr.24425) 283 : cluster [DBG] pgmap v507: 164 pgs: 164 active+clean; 455 KiB data, 551 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:59:17.375 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:17 vm03 bash[23394]: cluster 2026-03-10T14:59:16.307613+0000 mgr.y (mgr.24425) 283 : cluster [DBG] pgmap v507: 164 pgs: 164 active+clean; 455 KiB data, 551 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:59:17.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:17 vm00 bash[28403]: cluster 2026-03-10T14:59:16.307613+0000 mgr.y (mgr.24425) 283 : cluster [DBG] pgmap v507: 164 pgs: 164 active+clean; 455 KiB data, 551 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:59:17.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:17 vm00 bash[28403]: cluster 2026-03-10T14:59:16.307613+0000 mgr.y (mgr.24425) 283 : cluster [DBG] pgmap v507: 164 pgs: 164 active+clean; 455 KiB data, 551 
MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:59:17.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:17 vm00 bash[20726]: cluster 2026-03-10T14:59:16.307613+0000 mgr.y (mgr.24425) 283 : cluster [DBG] pgmap v507: 164 pgs: 164 active+clean; 455 KiB data, 551 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:59:17.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:17 vm00 bash[20726]: cluster 2026-03-10T14:59:16.307613+0000 mgr.y (mgr.24425) 283 : cluster [DBG] pgmap v507: 164 pgs: 164 active+clean; 455 KiB data, 551 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:59:18.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:18 vm00 bash[28403]: cluster 2026-03-10T14:59:17.121080+0000 mon.a (mon.0) 1467 : cluster [DBG] osdmap e368: 8 total, 8 up, 8 in 2026-03-10T14:59:18.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:18 vm00 bash[28403]: cluster 2026-03-10T14:59:17.121080+0000 mon.a (mon.0) 1467 : cluster [DBG] osdmap e368: 8 total, 8 up, 8 in 2026-03-10T14:59:18.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:18 vm00 bash[28403]: audit 2026-03-10T14:59:17.133478+0000 mon.b (mon.1) 73 : audit [INF] from='client.? 192.168.123.100:0/811817609' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:59:18.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:18 vm00 bash[28403]: audit 2026-03-10T14:59:17.133478+0000 mon.b (mon.1) 73 : audit [INF] from='client.? 192.168.123.100:0/811817609' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:59:18.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:18 vm00 bash[28403]: audit 2026-03-10T14:59:17.138774+0000 mon.a (mon.0) 1468 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:59:18.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:18 vm00 bash[28403]: audit 2026-03-10T14:59:17.138774+0000 mon.a (mon.0) 1468 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:59:18.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:18 vm00 bash[20726]: cluster 2026-03-10T14:59:17.121080+0000 mon.a (mon.0) 1467 : cluster [DBG] osdmap e368: 8 total, 8 up, 8 in 2026-03-10T14:59:18.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:18 vm00 bash[20726]: cluster 2026-03-10T14:59:17.121080+0000 mon.a (mon.0) 1467 : cluster [DBG] osdmap e368: 8 total, 8 up, 8 in 2026-03-10T14:59:18.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:18 vm00 bash[20726]: audit 2026-03-10T14:59:17.133478+0000 mon.b (mon.1) 73 : audit [INF] from='client.? 192.168.123.100:0/811817609' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:59:18.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:18 vm00 bash[20726]: audit 2026-03-10T14:59:17.133478+0000 mon.b (mon.1) 73 : audit [INF] from='client.? 192.168.123.100:0/811817609' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:59:18.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:18 vm00 bash[20726]: audit 2026-03-10T14:59:17.138774+0000 mon.a (mon.0) 1468 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:59:18.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:18 vm00 bash[20726]: audit 2026-03-10T14:59:17.138774+0000 mon.a (mon.0) 1468 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:59:18.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:18 vm03 bash[23394]: cluster 2026-03-10T14:59:17.121080+0000 mon.a (mon.0) 1467 : cluster [DBG] osdmap e368: 8 total, 8 up, 8 in 2026-03-10T14:59:18.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:18 vm03 bash[23394]: cluster 2026-03-10T14:59:17.121080+0000 mon.a (mon.0) 1467 : cluster [DBG] osdmap e368: 8 total, 8 up, 8 in 2026-03-10T14:59:18.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:18 vm03 bash[23394]: audit 2026-03-10T14:59:17.133478+0000 mon.b (mon.1) 73 : audit [INF] from='client.? 192.168.123.100:0/811817609' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:59:18.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:18 vm03 bash[23394]: audit 2026-03-10T14:59:17.133478+0000 mon.b (mon.1) 73 : audit [INF] from='client.? 192.168.123.100:0/811817609' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:59:18.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:18 vm03 bash[23394]: audit 2026-03-10T14:59:17.138774+0000 mon.a (mon.0) 1468 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:59:18.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:18 vm03 bash[23394]: audit 2026-03-10T14:59:17.138774+0000 mon.a (mon.0) 1468 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:59:19.125 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 14:59:18 vm03 bash[48459]: debug there is no tcmu-runner data available 2026-03-10T14:59:19.192 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx2::test_get_stats PASSED [ 90%] 2026-03-10T14:59:19.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:19 vm00 bash[28403]: audit 2026-03-10T14:59:18.136659+0000 mon.a (mon.0) 1469 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:59:19.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:19 vm00 bash[28403]: audit 2026-03-10T14:59:18.136659+0000 mon.a (mon.0) 1469 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:59:19.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:19 vm00 bash[28403]: cluster 2026-03-10T14:59:18.140949+0000 mon.a (mon.0) 1470 : cluster [DBG] osdmap e369: 8 total, 8 up, 8 in 2026-03-10T14:59:19.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:19 vm00 bash[28403]: cluster 2026-03-10T14:59:18.140949+0000 mon.a (mon.0) 1470 : cluster [DBG] osdmap e369: 8 total, 8 up, 8 in 2026-03-10T14:59:19.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:19 vm00 bash[28403]: cluster 2026-03-10T14:59:18.307987+0000 mgr.y (mgr.24425) 284 : cluster [DBG] pgmap v510: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 551 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:59:19.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:19 vm00 bash[28403]: cluster 2026-03-10T14:59:18.307987+0000 mgr.y (mgr.24425) 284 : cluster [DBG] pgmap v510: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 551 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:59:19.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 
14:59:19 vm00 bash[28403]: cluster 2026-03-10T14:59:18.473407+0000 mon.a (mon.0) 1471 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:59:19.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:19 vm00 bash[28403]: cluster 2026-03-10T14:59:18.473407+0000 mon.a (mon.0) 1471 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:59:19.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:19 vm00 bash[20726]: audit 2026-03-10T14:59:18.136659+0000 mon.a (mon.0) 1469 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:59:19.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:19 vm00 bash[20726]: audit 2026-03-10T14:59:18.136659+0000 mon.a (mon.0) 1469 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:59:19.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:19 vm00 bash[20726]: cluster 2026-03-10T14:59:18.140949+0000 mon.a (mon.0) 1470 : cluster [DBG] osdmap e369: 8 total, 8 up, 8 in 2026-03-10T14:59:19.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:19 vm00 bash[20726]: cluster 2026-03-10T14:59:18.140949+0000 mon.a (mon.0) 1470 : cluster [DBG] osdmap e369: 8 total, 8 up, 8 in 2026-03-10T14:59:19.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:19 vm00 bash[20726]: cluster 2026-03-10T14:59:18.307987+0000 mgr.y (mgr.24425) 284 : cluster [DBG] pgmap v510: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 551 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:59:19.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:19 vm00 bash[20726]: cluster 2026-03-10T14:59:18.307987+0000 mgr.y (mgr.24425) 284 : cluster [DBG] pgmap v510: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 551 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-10T14:59:19.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:19 vm00 bash[20726]: cluster 2026-03-10T14:59:18.473407+0000 mon.a (mon.0) 1471 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T14:59:19.465 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:19 vm00 bash[20726]: cluster 2026-03-10T14:59:18.473407+0000 mon.a (mon.0) 1471 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T14:59:19.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:19 vm03 bash[23394]: audit 2026-03-10T14:59:18.136659+0000 mon.a (mon.0) 1469 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T14:59:19.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:19 vm03 bash[23394]: audit 2026-03-10T14:59:18.136659+0000 mon.a (mon.0) 1469 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-10T14:59:19.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:19 vm03 bash[23394]: cluster 2026-03-10T14:59:18.140949+0000 mon.a (mon.0) 1470 : cluster [DBG] osdmap e369: 8 total, 8 up, 8 in
2026-03-10T14:59:19.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:19 vm03 bash[23394]: cluster 2026-03-10T14:59:18.140949+0000 mon.a (mon.0) 1470 : cluster [DBG] osdmap e369: 8 total, 8 up, 8 in
2026-03-10T14:59:19.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:19 vm03 bash[23394]: cluster 2026-03-10T14:59:18.307987+0000 mgr.y (mgr.24425) 284 : cluster [DBG] pgmap v510: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 551 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:59:19.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:19 vm03 bash[23394]: cluster 2026-03-10T14:59:18.307987+0000 mgr.y (mgr.24425) 284 : cluster [DBG] pgmap v510: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 551 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:59:19.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:19 vm03 bash[23394]: cluster 2026-03-10T14:59:18.473407+0000 mon.a (mon.0) 1471 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T14:59:19.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:19 vm03 bash[23394]: cluster 2026-03-10T14:59:18.473407+0000 mon.a (mon.0) 1471 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T14:59:20.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:20 vm00 bash[28403]: audit 2026-03-10T14:59:18.821592+0000 mgr.y (mgr.24425) 285 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:59:20.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:20 vm00 bash[28403]: audit 2026-03-10T14:59:18.821592+0000 mgr.y (mgr.24425) 285 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:59:20.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:20 vm00 bash[28403]: cluster 2026-03-10T14:59:19.191520+0000 mon.a (mon.0) 1472 : cluster [DBG] osdmap e370: 8 total, 8 up, 8 in
2026-03-10T14:59:20.464 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:20 vm00 bash[28403]: cluster 2026-03-10T14:59:19.191520+0000 mon.a (mon.0) 1472 : cluster [DBG] osdmap e370: 8 total, 8 up, 8 in
2026-03-10T14:59:20.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:20 vm00 bash[20726]: audit 2026-03-10T14:59:18.821592+0000 mgr.y (mgr.24425) 285 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:59:20.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:20 vm00 bash[20726]: audit 2026-03-10T14:59:18.821592+0000 mgr.y (mgr.24425) 285 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:59:20.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:20 vm00 bash[20726]: cluster 2026-03-10T14:59:19.191520+0000 mon.a (mon.0) 1472 : cluster [DBG] osdmap e370: 8 total, 8 up, 8 in
2026-03-10T14:59:20.464 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:20 vm00 bash[20726]: cluster 2026-03-10T14:59:19.191520+0000 mon.a (mon.0) 1472 : cluster [DBG] osdmap e370: 8 total, 8 up, 8 in
2026-03-10T14:59:20.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:20 vm03 bash[23394]: audit 2026-03-10T14:59:18.821592+0000 mgr.y (mgr.24425) 285 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:59:20.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:20 vm03 bash[23394]: audit 2026-03-10T14:59:18.821592+0000 mgr.y (mgr.24425) 285 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:59:20.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:20 vm03 bash[23394]: cluster 2026-03-10T14:59:19.191520+0000 mon.a (mon.0) 1472 : cluster [DBG] osdmap e370: 8 total, 8 up, 8 in
2026-03-10T14:59:20.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:20 vm03 bash[23394]: cluster 2026-03-10T14:59:19.191520+0000 mon.a (mon.0) 1472 : cluster [DBG] osdmap e370: 8 total, 8 up, 8 in
2026-03-10T14:59:21.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:21 vm03 bash[23394]: cluster 2026-03-10T14:59:20.222509+0000 mon.a (mon.0) 1473 : cluster [DBG] osdmap e371: 8 total, 8 up, 8 in
2026-03-10T14:59:21.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:21 vm03 bash[23394]: cluster 2026-03-10T14:59:20.222509+0000 mon.a (mon.0) 1473 : cluster [DBG] osdmap e371: 8 total, 8 up, 8 in
2026-03-10T14:59:21.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:21 vm03 bash[23394]: cluster 2026-03-10T14:59:20.308457+0000 mgr.y (mgr.24425) 286 : cluster [DBG] pgmap v513: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 552 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:59:21.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:21 vm03 bash[23394]: cluster 2026-03-10T14:59:20.308457+0000 mgr.y (mgr.24425) 286 : cluster [DBG] pgmap v513: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 552 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:59:21.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:21 vm00 bash[28403]: cluster 2026-03-10T14:59:20.222509+0000 mon.a (mon.0) 1473 : cluster [DBG] osdmap e371: 8 total, 8 up, 8 in
2026-03-10T14:59:21.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:21 vm00 bash[28403]: cluster 2026-03-10T14:59:20.222509+0000 mon.a (mon.0) 1473 : cluster [DBG] osdmap e371: 8 total, 8 up, 8 in
2026-03-10T14:59:21.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:21 vm00 bash[28403]: cluster 2026-03-10T14:59:20.308457+0000 mgr.y (mgr.24425) 286 : cluster [DBG] pgmap v513: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 552 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:59:21.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:21 vm00 bash[28403]: cluster 2026-03-10T14:59:20.308457+0000 mgr.y (mgr.24425) 286 : cluster [DBG] pgmap v513: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 552 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:59:21.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:21 vm00 bash[20726]: cluster 2026-03-10T14:59:20.222509+0000 mon.a (mon.0) 1473 : cluster [DBG] osdmap e371: 8 total, 8 up, 8 in
2026-03-10T14:59:21.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:21 vm00 bash[20726]: cluster 2026-03-10T14:59:20.222509+0000 mon.a (mon.0) 1473 : cluster [DBG] osdmap e371: 8 total, 8 up, 8 in
2026-03-10T14:59:21.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:21 vm00 bash[20726]: cluster 2026-03-10T14:59:20.308457+0000 mgr.y (mgr.24425) 286 : cluster [DBG] pgmap v513: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 552 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:59:21.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:21 vm00 bash[20726]: cluster 2026-03-10T14:59:20.308457+0000 mgr.y (mgr.24425) 286 : cluster [DBG] pgmap v513: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 552 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:59:22.232 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestObject::test_read PASSED [ 91%]
2026-03-10T14:59:22.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:22 vm03 bash[23394]: cluster 2026-03-10T14:59:21.226969+0000 mon.a (mon.0) 1474 : cluster [DBG] osdmap e372: 8 total, 8 up, 8 in
2026-03-10T14:59:22.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:22 vm03 bash[23394]: cluster 2026-03-10T14:59:21.226969+0000 mon.a (mon.0) 1474 : cluster [DBG] osdmap e372: 8 total, 8 up, 8 in
2026-03-10T14:59:22.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:22 vm00 bash[28403]: cluster 2026-03-10T14:59:21.226969+0000 mon.a (mon.0) 1474 : cluster [DBG] osdmap e372: 8 total, 8 up, 8 in
2026-03-10T14:59:22.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:22 vm00 bash[28403]: cluster 2026-03-10T14:59:21.226969+0000 mon.a (mon.0) 1474 : cluster [DBG] osdmap e372: 8 total, 8 up, 8 in
2026-03-10T14:59:22.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:22 vm00 bash[20726]: cluster 2026-03-10T14:59:21.226969+0000 mon.a (mon.0) 1474 : cluster [DBG] osdmap e372: 8 total, 8 up, 8 in
2026-03-10T14:59:22.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:22 vm00 bash[20726]: cluster 2026-03-10T14:59:21.226969+0000 mon.a (mon.0) 1474 : cluster [DBG] osdmap e372: 8 total, 8 up, 8 in
2026-03-10T14:59:23.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:23 vm03 bash[23394]: cluster 2026-03-10T14:59:22.229666+0000 mon.a (mon.0) 1475 : cluster [DBG] osdmap e373: 8 total, 8 up, 8 in
2026-03-10T14:59:23.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:23 vm03 bash[23394]: cluster 2026-03-10T14:59:22.229666+0000 mon.a (mon.0) 1475 : cluster [DBG] osdmap e373: 8 total, 8 up, 8 in
2026-03-10T14:59:23.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:23 vm03 bash[23394]: cluster 2026-03-10T14:59:22.308707+0000 mgr.y (mgr.24425) 287 : cluster [DBG] pgmap v516: 164 pgs: 164 active+clean; 455 KiB data, 552 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:59:23.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:23 vm03 bash[23394]: cluster 2026-03-10T14:59:22.308707+0000 mgr.y (mgr.24425) 287 : cluster [DBG] pgmap v516: 164 pgs: 164 active+clean; 455 KiB data, 552 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:59:23.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:23 vm00 bash[28403]: cluster 2026-03-10T14:59:22.229666+0000 mon.a (mon.0) 1475 : cluster [DBG] osdmap e373: 8 total, 8 up, 8 in
2026-03-10T14:59:23.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:23 vm00 bash[28403]: cluster 2026-03-10T14:59:22.229666+0000 mon.a (mon.0) 1475 : cluster [DBG] osdmap e373: 8 total, 8 up, 8 in
2026-03-10T14:59:23.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:23 vm00 bash[28403]: cluster 2026-03-10T14:59:22.308707+0000 mgr.y (mgr.24425) 287 : cluster [DBG] pgmap v516: 164 pgs: 164 active+clean; 455 KiB data, 552 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:59:23.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:23 vm00 bash[28403]: cluster 2026-03-10T14:59:22.308707+0000 mgr.y (mgr.24425) 287 : cluster [DBG] pgmap v516: 164 pgs: 164 active+clean; 455 KiB data, 552 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:59:23.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:23 vm00 bash[20726]: cluster 2026-03-10T14:59:22.229666+0000 mon.a (mon.0) 1475 : cluster [DBG] osdmap e373: 8 total, 8 up, 8 in
2026-03-10T14:59:23.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:23 vm00 bash[20726]: cluster 2026-03-10T14:59:22.229666+0000 mon.a (mon.0) 1475 : cluster [DBG] osdmap e373: 8 total, 8 up, 8 in
2026-03-10T14:59:23.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:23 vm00 bash[20726]: cluster 2026-03-10T14:59:22.308707+0000 mgr.y (mgr.24425) 287 : cluster [DBG] pgmap v516: 164 pgs: 164 active+clean; 455 KiB data, 552 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:59:23.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:23 vm00 bash[20726]: cluster 2026-03-10T14:59:22.308707+0000 mgr.y (mgr.24425) 287 : cluster [DBG] pgmap v516: 164 pgs: 164 active+clean; 455 KiB data, 552 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:59:24.214 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:59:23 vm00 bash[21005]: ::ffff:192.168.123.103 - - [10/Mar/2026:14:59:23] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T14:59:24.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:24 vm03 bash[23394]: cluster 2026-03-10T14:59:23.265521+0000 mon.a (mon.0) 1476 : cluster [DBG] osdmap e374: 8 total, 8 up, 8 in
2026-03-10T14:59:24.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:24 vm03 bash[23394]: cluster 2026-03-10T14:59:23.265521+0000 mon.a (mon.0) 1476 : cluster [DBG] osdmap e374: 8 total, 8 up, 8 in
2026-03-10T14:59:24.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:24 vm00 bash[28403]: cluster 2026-03-10T14:59:23.265521+0000 mon.a (mon.0) 1476 : cluster [DBG] osdmap e374: 8 total, 8 up, 8 in
2026-03-10T14:59:24.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:24 vm00 bash[28403]: cluster 2026-03-10T14:59:23.265521+0000 mon.a (mon.0) 1476 : cluster [DBG] osdmap e374: 8 total, 8 up, 8 in
2026-03-10T14:59:24.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:24 vm00 bash[20726]: cluster 2026-03-10T14:59:23.265521+0000 mon.a (mon.0) 1476 : cluster [DBG] osdmap e374: 8 total, 8 up, 8 in
2026-03-10T14:59:24.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:24 vm00 bash[20726]: cluster 2026-03-10T14:59:23.265521+0000 mon.a (mon.0) 1476 : cluster [DBG] osdmap e374: 8 total, 8 up, 8 in
2026-03-10T14:59:25.267 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestObject::test_seek PASSED [ 92%]
2026-03-10T14:59:25.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:25 vm03 bash[23394]: cluster 2026-03-10T14:59:24.260028+0000 mon.a (mon.0) 1477 : cluster [DBG] osdmap e375: 8 total, 8 up, 8 in
2026-03-10T14:59:25.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:25 vm03 bash[23394]: cluster 2026-03-10T14:59:24.260028+0000 mon.a (mon.0) 1477 : cluster [DBG] osdmap e375: 8 total, 8 up, 8 in
2026-03-10T14:59:25.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:25 vm03 bash[23394]: cluster 2026-03-10T14:59:24.309014+0000 mgr.y (mgr.24425) 288 : cluster [DBG] pgmap v519: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 552 MiB used, 159 GiB / 160 GiB avail
2026-03-10T14:59:25.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:25 vm03 bash[23394]: cluster 2026-03-10T14:59:24.309014+0000 mgr.y (mgr.24425) 288 : cluster [DBG] pgmap v519: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 552 MiB used, 159 GiB / 160 GiB avail
2026-03-10T14:59:25.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:25 vm03 bash[23394]: audit 2026-03-10T14:59:25.106986+0000 mon.a (mon.0) 1478 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:59:25.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:25 vm03 bash[23394]: audit 2026-03-10T14:59:25.106986+0000 mon.a (mon.0) 1478 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:59:25.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:25 vm03 bash[23394]: cluster 2026-03-10T14:59:25.261048+0000 mon.a (mon.0) 1479 : cluster [DBG] osdmap e376: 8 total, 8 up, 8 in
2026-03-10T14:59:25.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:25 vm03 bash[23394]: cluster 2026-03-10T14:59:25.261048+0000 mon.a (mon.0) 1479 : cluster [DBG] osdmap e376: 8 total, 8 up, 8 in
2026-03-10T14:59:25.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:25 vm00 bash[28403]: cluster 2026-03-10T14:59:24.260028+0000 mon.a (mon.0) 1477 : cluster [DBG] osdmap e375: 8 total, 8 up, 8 in
2026-03-10T14:59:25.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:25 vm00 bash[28403]: cluster 2026-03-10T14:59:24.260028+0000 mon.a (mon.0) 1477 : cluster [DBG] osdmap e375: 8 total, 8 up, 8 in
2026-03-10T14:59:25.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:25 vm00 bash[28403]: cluster 2026-03-10T14:59:24.309014+0000 mgr.y (mgr.24425) 288 : cluster [DBG] pgmap v519: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 552 MiB used, 159 GiB / 160 GiB avail
2026-03-10T14:59:25.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:25 vm00 bash[28403]: cluster 2026-03-10T14:59:24.309014+0000 mgr.y (mgr.24425) 288 : cluster [DBG] pgmap v519: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 552 MiB used, 159 GiB / 160 GiB avail
2026-03-10T14:59:25.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:25 vm00 bash[28403]: audit 2026-03-10T14:59:25.106986+0000 mon.a (mon.0) 1478 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:59:25.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:25 vm00 bash[28403]: audit 2026-03-10T14:59:25.106986+0000 mon.a (mon.0) 1478 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:59:25.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:25 vm00 bash[28403]: cluster 2026-03-10T14:59:25.261048+0000 mon.a (mon.0) 1479 : cluster [DBG] osdmap e376: 8 total, 8 up, 8 in
2026-03-10T14:59:25.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:25 vm00 bash[28403]: cluster 2026-03-10T14:59:25.261048+0000 mon.a (mon.0) 1479 : cluster [DBG] osdmap e376: 8 total, 8 up, 8 in
2026-03-10T14:59:25.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:25 vm00 bash[20726]: cluster 2026-03-10T14:59:24.260028+0000 mon.a (mon.0) 1477 : cluster [DBG] osdmap e375: 8 total, 8 up, 8 in
2026-03-10T14:59:25.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:25 vm00 bash[20726]: cluster 2026-03-10T14:59:24.260028+0000 mon.a (mon.0) 1477 : cluster [DBG] osdmap e375: 8 total, 8 up, 8 in
2026-03-10T14:59:25.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:25 vm00 bash[20726]: cluster 2026-03-10T14:59:24.309014+0000 mgr.y (mgr.24425) 288 : cluster [DBG] pgmap v519: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 552 MiB used, 159 GiB / 160 GiB avail
2026-03-10T14:59:25.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:25 vm00 bash[20726]: cluster 2026-03-10T14:59:24.309014+0000 mgr.y (mgr.24425) 288 : cluster [DBG] pgmap v519: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 552 MiB used, 159 GiB / 160 GiB avail
2026-03-10T14:59:25.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:25 vm00 bash[20726]: audit 2026-03-10T14:59:25.106986+0000 mon.a (mon.0) 1478 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:59:25.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:25 vm00 bash[20726]: audit 2026-03-10T14:59:25.106986+0000 mon.a (mon.0) 1478 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:59:25.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:25 vm00 bash[20726]: cluster 2026-03-10T14:59:25.261048+0000 mon.a (mon.0) 1479 : cluster [DBG] osdmap e376: 8 total, 8 up, 8 in
2026-03-10T14:59:25.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:25 vm00 bash[20726]: cluster 2026-03-10T14:59:25.261048+0000 mon.a (mon.0) 1479 : cluster [DBG] osdmap e376: 8 total, 8 up, 8 in
2026-03-10T14:59:26.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:26 vm03 bash[23394]: cluster 2026-03-10T14:59:25.290135+0000 mon.a (mon.0) 1480 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T14:59:26.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:26 vm03 bash[23394]: cluster 2026-03-10T14:59:25.290135+0000 mon.a (mon.0) 1480 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T14:59:26.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:26 vm03 bash[23394]: cluster 2026-03-10T14:59:26.280185+0000 mon.a (mon.0) 1481 : cluster [DBG] osdmap e377: 8 total, 8 up, 8 in
2026-03-10T14:59:26.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:26 vm03 bash[23394]: cluster 2026-03-10T14:59:26.280185+0000 mon.a (mon.0) 1481 : cluster [DBG] osdmap e377: 8 total, 8 up, 8 in
2026-03-10T14:59:26.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:26 vm00 bash[28403]: cluster 2026-03-10T14:59:25.290135+0000 mon.a (mon.0) 1480 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T14:59:26.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:26 vm00 bash[28403]: cluster 2026-03-10T14:59:25.290135+0000 mon.a (mon.0) 1480 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T14:59:26.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:26 vm00 bash[28403]: cluster 2026-03-10T14:59:26.280185+0000 mon.a (mon.0) 1481 : cluster [DBG] osdmap e377: 8 total, 8 up, 8 in
2026-03-10T14:59:26.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:26 vm00 bash[28403]: cluster 2026-03-10T14:59:26.280185+0000 mon.a (mon.0) 1481 : cluster [DBG] osdmap e377: 8 total, 8 up, 8 in
2026-03-10T14:59:26.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:26 vm00 bash[20726]: cluster 2026-03-10T14:59:25.290135+0000 mon.a (mon.0) 1480 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T14:59:26.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:26 vm00 bash[20726]: cluster 2026-03-10T14:59:25.290135+0000 mon.a (mon.0) 1480 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T14:59:26.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:26 vm00 bash[20726]: cluster 2026-03-10T14:59:26.280185+0000 mon.a (mon.0) 1481 : cluster [DBG] osdmap e377: 8 total, 8 up, 8 in
2026-03-10T14:59:26.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:26 vm00 bash[20726]: cluster 2026-03-10T14:59:26.280185+0000 mon.a (mon.0) 1481 : cluster [DBG] osdmap e377: 8 total, 8 up, 8 in
2026-03-10T14:59:27.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:27 vm03 bash[23394]: cluster 2026-03-10T14:59:26.309282+0000 mgr.y (mgr.24425) 289 : cluster [DBG] pgmap v522: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 556 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:59:27.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:27 vm03 bash[23394]: cluster 2026-03-10T14:59:26.309282+0000 mgr.y (mgr.24425) 289 : cluster [DBG] pgmap v522: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 556 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:59:27.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:27 vm03 bash[23394]: cluster 2026-03-10T14:59:27.273581+0000 mon.a (mon.0) 1482 : cluster [DBG] osdmap e378: 8 total, 8 up, 8 in
2026-03-10T14:59:27.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:27 vm03 bash[23394]: cluster 2026-03-10T14:59:27.273581+0000 mon.a (mon.0) 1482 : cluster [DBG] osdmap e378: 8 total, 8 up, 8 in
2026-03-10T14:59:27.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:27 vm00 bash[28403]: cluster 2026-03-10T14:59:26.309282+0000 mgr.y (mgr.24425) 289 : cluster [DBG] pgmap v522: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 556 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:59:27.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:27 vm00 bash[28403]: cluster 2026-03-10T14:59:26.309282+0000 mgr.y (mgr.24425) 289 : cluster [DBG] pgmap v522: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 556 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:59:27.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:27 vm00 bash[28403]: cluster 2026-03-10T14:59:27.273581+0000 mon.a (mon.0) 1482 : cluster [DBG] osdmap e378: 8 total, 8 up, 8 in
2026-03-10T14:59:27.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:27 vm00 bash[28403]: cluster 2026-03-10T14:59:27.273581+0000 mon.a (mon.0) 1482 : cluster [DBG] osdmap e378: 8 total, 8 up, 8 in
2026-03-10T14:59:27.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:27 vm00 bash[20726]: cluster 2026-03-10T14:59:26.309282+0000 mgr.y (mgr.24425) 289 : cluster [DBG] pgmap v522: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 556 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:59:27.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:27 vm00 bash[20726]: cluster 2026-03-10T14:59:26.309282+0000 mgr.y (mgr.24425) 289 : cluster [DBG] pgmap v522: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 556 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:59:27.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:27 vm00 bash[20726]: cluster 2026-03-10T14:59:27.273581+0000 mon.a (mon.0) 1482 : cluster [DBG] osdmap e378: 8 total, 8 up, 8 in
2026-03-10T14:59:27.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:27 vm00 bash[20726]: cluster 2026-03-10T14:59:27.273581+0000 mon.a (mon.0) 1482 : cluster [DBG] osdmap e378: 8 total, 8 up, 8 in
2026-03-10T14:59:28.275 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestObject::test_write PASSED [ 93%]
2026-03-10T14:59:29.125 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 14:59:28 vm03 bash[48459]: debug there is no tcmu-runner data available
2026-03-10T14:59:29.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:29 vm03 bash[23394]: cluster 2026-03-10T14:59:28.276490+0000 mon.a (mon.0) 1483 : cluster [DBG] osdmap e379: 8 total, 8 up, 8 in
2026-03-10T14:59:29.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:29 vm03 bash[23394]: cluster 2026-03-10T14:59:28.276490+0000 mon.a (mon.0) 1483 : cluster [DBG] osdmap e379: 8 total, 8 up, 8 in
2026-03-10T14:59:29.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:29 vm03 bash[23394]: cluster 2026-03-10T14:59:28.309542+0000 mgr.y (mgr.24425) 290 : cluster [DBG] pgmap v525: 164 pgs: 164 active+clean; 455 KiB data, 556 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:59:29.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:29 vm03 bash[23394]: cluster 2026-03-10T14:59:28.309542+0000 mgr.y (mgr.24425) 290 : cluster [DBG] pgmap v525: 164 pgs: 164 active+clean; 455 KiB data, 556 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:59:29.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:29 vm00 bash[28403]: cluster 2026-03-10T14:59:28.276490+0000 mon.a (mon.0) 1483 : cluster [DBG] osdmap e379: 8 total, 8 up, 8 in
2026-03-10T14:59:29.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:29 vm00 bash[28403]: cluster 2026-03-10T14:59:28.276490+0000 mon.a (mon.0) 1483 : cluster [DBG] osdmap e379: 8 total, 8 up, 8 in
2026-03-10T14:59:29.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:29 vm00 bash[28403]: cluster 2026-03-10T14:59:28.309542+0000 mgr.y (mgr.24425) 290 : cluster [DBG] pgmap v525: 164 pgs: 164 active+clean; 455 KiB data, 556 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:59:29.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:29 vm00 bash[28403]: cluster 2026-03-10T14:59:28.309542+0000 mgr.y (mgr.24425) 290 : cluster [DBG] pgmap v525: 164 pgs: 164 active+clean; 455 KiB data, 556 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:59:29.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:29 vm00 bash[20726]: cluster 2026-03-10T14:59:28.276490+0000 mon.a (mon.0) 1483 : cluster [DBG] osdmap e379: 8 total, 8 up, 8 in
2026-03-10T14:59:29.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:29 vm00 bash[20726]: cluster 2026-03-10T14:59:28.276490+0000 mon.a (mon.0) 1483 : cluster [DBG] osdmap e379: 8 total, 8 up, 8 in
2026-03-10T14:59:29.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:29 vm00 bash[20726]: cluster 2026-03-10T14:59:28.309542+0000 mgr.y (mgr.24425) 290 : cluster [DBG] pgmap v525: 164 pgs: 164 active+clean; 455 KiB data, 556 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:59:29.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:29 vm00 bash[20726]: cluster 2026-03-10T14:59:28.309542+0000 mgr.y (mgr.24425) 290 : cluster [DBG] pgmap v525: 164 pgs: 164 active+clean; 455 KiB data, 556 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:59:30.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:30 vm00 bash[28403]: audit 2026-03-10T14:59:28.826974+0000 mgr.y (mgr.24425) 291 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:59:30.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:30 vm00 bash[28403]: audit 2026-03-10T14:59:28.826974+0000 mgr.y (mgr.24425) 291 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:59:30.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:30 vm00 bash[28403]: cluster 2026-03-10T14:59:29.648876+0000 mon.a (mon.0) 1484 : cluster [DBG] osdmap e380: 8 total, 8 up, 8 in
2026-03-10T14:59:30.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:30 vm00 bash[28403]: cluster 2026-03-10T14:59:29.648876+0000 mon.a (mon.0) 1484 : cluster [DBG] osdmap e380: 8 total, 8 up, 8 in
2026-03-10T14:59:30.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:30 vm00 bash[20726]: audit 2026-03-10T14:59:28.826974+0000 mgr.y (mgr.24425) 291 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:59:30.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:30 vm00 bash[20726]: audit 2026-03-10T14:59:28.826974+0000 mgr.y (mgr.24425) 291 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:59:30.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:30 vm00 bash[20726]: cluster 2026-03-10T14:59:29.648876+0000 mon.a (mon.0) 1484 : cluster [DBG] osdmap e380: 8 total, 8 up, 8 in
2026-03-10T14:59:30.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:30 vm00 bash[20726]: cluster 2026-03-10T14:59:29.648876+0000 mon.a (mon.0) 1484 : cluster [DBG] osdmap e380: 8 total, 8 up, 8 in
2026-03-10T14:59:31.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:30 vm03 bash[23394]: audit 2026-03-10T14:59:28.826974+0000 mgr.y (mgr.24425) 291 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:59:31.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:30 vm03 bash[23394]: audit 2026-03-10T14:59:28.826974+0000 mgr.y (mgr.24425) 291 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:59:31.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:30 vm03 bash[23394]: cluster 2026-03-10T14:59:29.648876+0000 mon.a (mon.0) 1484 : cluster [DBG] osdmap e380: 8 total, 8 up, 8 in
2026-03-10T14:59:31.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:30 vm03 bash[23394]: cluster 2026-03-10T14:59:29.648876+0000 mon.a (mon.0) 1484 : cluster [DBG] osdmap e380: 8 total, 8 up, 8 in
2026-03-10T14:59:31.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:31 vm03 bash[23394]: cluster 2026-03-10T14:59:30.310136+0000 mgr.y (mgr.24425) 292 : cluster [DBG] pgmap v527: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 557 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:59:31.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:31 vm03 bash[23394]: cluster 2026-03-10T14:59:30.310136+0000 mgr.y (mgr.24425) 292 : cluster [DBG] pgmap v527: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 557 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:59:31.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:31 vm03 bash[23394]: cluster 2026-03-10T14:59:30.556971+0000 mon.a (mon.0) 1485 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T14:59:31.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:31 vm03 bash[23394]: cluster 2026-03-10T14:59:30.556971+0000 mon.a (mon.0) 1485 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T14:59:31.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:31 vm03 bash[23394]: cluster 2026-03-10T14:59:30.564268+0000 mon.a (mon.0) 1486 : cluster [DBG] osdmap e381: 8 total, 8 up, 8 in
2026-03-10T14:59:31.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:31 vm03 bash[23394]: cluster 2026-03-10T14:59:30.564268+0000 mon.a (mon.0) 1486 : cluster [DBG] osdmap e381: 8 total, 8 up, 8 in
2026-03-10T14:59:31.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:31 vm03 bash[23394]: cluster 2026-03-10T14:59:31.567345+0000 mon.a (mon.0) 1487 : cluster [DBG] osdmap e382: 8 total, 8 up, 8 in
2026-03-10T14:59:31.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:31 vm03 bash[23394]: cluster 2026-03-10T14:59:31.567345+0000 mon.a (mon.0) 1487 : cluster [DBG] osdmap e382: 8 total, 8 up, 8 in
2026-03-10T14:59:31.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:31 vm00 bash[28403]: cluster 2026-03-10T14:59:30.310136+0000 mgr.y (mgr.24425) 292 : cluster [DBG] pgmap v527: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 557 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:59:31.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:31 vm00 bash[28403]: cluster 2026-03-10T14:59:30.310136+0000 mgr.y (mgr.24425) 292 : cluster [DBG] pgmap v527: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 557 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:59:31.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:31 vm00 bash[28403]: cluster 2026-03-10T14:59:30.556971+0000 mon.a (mon.0) 1485 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T14:59:31.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:31 vm00 bash[28403]: cluster 2026-03-10T14:59:30.556971+0000 mon.a (mon.0) 1485 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T14:59:31.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:31 vm00 bash[28403]: cluster 2026-03-10T14:59:30.564268+0000 mon.a (mon.0) 1486 : cluster [DBG] osdmap e381: 8 total, 8 up, 8 in
2026-03-10T14:59:31.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:31 vm00 bash[28403]: cluster 2026-03-10T14:59:30.564268+0000 mon.a (mon.0) 1486 : cluster [DBG] osdmap e381: 8 total, 8 up, 8 in
2026-03-10T14:59:31.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:31 vm00 bash[28403]: cluster 2026-03-10T14:59:31.567345+0000 mon.a (mon.0) 1487 : cluster [DBG] osdmap e382: 8 total, 8 up, 8 in
2026-03-10T14:59:31.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:31 vm00 bash[28403]: cluster 2026-03-10T14:59:31.567345+0000 mon.a (mon.0) 1487 : cluster [DBG] osdmap e382: 8 total, 8 up, 8 in
2026-03-10T14:59:31.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:31 vm00 bash[20726]: cluster 2026-03-10T14:59:30.310136+0000 mgr.y (mgr.24425) 292 : cluster [DBG] pgmap v527: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 557 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:59:31.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:31 vm00 bash[20726]: cluster 2026-03-10T14:59:30.310136+0000 mgr.y (mgr.24425) 292 : cluster [DBG] pgmap v527: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 557 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:59:31.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:31 vm00 bash[20726]: cluster 2026-03-10T14:59:30.556971+0000 mon.a (mon.0) 1485 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T14:59:31.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:31 vm00 bash[20726]: cluster 2026-03-10T14:59:30.556971+0000 mon.a (mon.0) 1485 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T14:59:31.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:31 vm00 bash[20726]: cluster 2026-03-10T14:59:30.564268+0000 mon.a (mon.0) 1486 : cluster [DBG] osdmap e381: 8 total, 8 up, 8 in
2026-03-10T14:59:31.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:31 vm00 bash[20726]: cluster 2026-03-10T14:59:30.564268+0000 mon.a (mon.0) 1486 : cluster [DBG] osdmap e381: 8 total, 8 up, 8 in
2026-03-10T14:59:31.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:31 vm00 bash[20726]: cluster 2026-03-10T14:59:31.567345+0000 mon.a (mon.0) 1487 : cluster [DBG] osdmap e382: 8 total, 8 up, 8 in
2026-03-10T14:59:31.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:31 vm00 bash[20726]: cluster 2026-03-10T14:59:31.567345+0000 mon.a (mon.0) 1487 : cluster [DBG] osdmap e382: 8 total, 8 up, 8 in
2026-03-10T14:59:33.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:33 vm00 bash[28403]: cluster 2026-03-10T14:59:32.310439+0000 mgr.y (mgr.24425) 293 : cluster [DBG] pgmap v530: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 557 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:59:33.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:33 vm00 bash[28403]: cluster 2026-03-10T14:59:32.310439+0000 mgr.y (mgr.24425) 293 : cluster [DBG] pgmap v530: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 557 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:59:33.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:33 vm00 bash[28403]: cluster 2026-03-10T14:59:32.735975+0000 mon.a (mon.0) 1488 : cluster [DBG] osdmap e383: 8 total, 8 up, 8 in
2026-03-10T14:59:33.964 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:33 vm00 bash[28403]: cluster 2026-03-10T14:59:32.735975+0000 mon.a (mon.0) 1488 : cluster [DBG] osdmap e383: 8 total, 8 up, 8 in
2026-03-10T14:59:33.964 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:59:33 vm00 bash[21005]: ::ffff:192.168.123.103 - - [10/Mar/2026:14:59:33] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T14:59:33.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:33 vm00 bash[20726]: cluster 2026-03-10T14:59:32.310439+0000 mgr.y (mgr.24425) 293 : cluster [DBG] pgmap v530: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 557 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:59:33.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:33 vm00 bash[20726]: cluster 2026-03-10T14:59:32.310439+0000 mgr.y (mgr.24425) 293 : cluster [DBG] pgmap v530: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 557 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:59:33.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:33 vm00 bash[20726]: cluster 2026-03-10T14:59:32.735975+0000 mon.a (mon.0) 1488 : cluster [DBG] osdmap e383: 8 total, 8 up, 8 in
2026-03-10T14:59:33.964 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:33 vm00 bash[20726]: cluster 2026-03-10T14:59:32.735975+0000 mon.a (mon.0) 1488 : cluster [DBG] osdmap e383: 8 total, 8 up, 8 in
2026-03-10T14:59:34.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:33 vm03 bash[23394]: cluster 2026-03-10T14:59:32.310439+0000 mgr.y (mgr.24425) 293 : cluster [DBG] pgmap v530: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 557 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:59:34.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:33 vm03 bash[23394]: cluster 2026-03-10T14:59:32.310439+0000 mgr.y (mgr.24425) 293 : cluster [DBG] pgmap v530: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 557 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:59:34.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:33 vm03 bash[23394]: cluster 2026-03-10T14:59:32.735975+0000 mon.a (mon.0) 1488 : cluster [DBG] osdmap e383: 8 total, 8 up, 8 in
2026-03-10T14:59:34.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:33 vm03 bash[23394]: cluster 2026-03-10T14:59:32.735975+0000 mon.a (mon.0) 1488 : cluster [DBG] osdmap e383: 8 total, 8 up, 8 in
2026-03-10T14:59:35.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:34 vm03 bash[23394]: cluster
2026-03-10T14:59:33.754457+0000 mon.a (mon.0) 1489 : cluster [DBG] osdmap e384: 8 total, 8 up, 8 in 2026-03-10T14:59:35.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:34 vm03 bash[23394]: cluster 2026-03-10T14:59:33.754457+0000 mon.a (mon.0) 1489 : cluster [DBG] osdmap e384: 8 total, 8 up, 8 in 2026-03-10T14:59:35.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:34 vm00 bash[28403]: cluster 2026-03-10T14:59:33.754457+0000 mon.a (mon.0) 1489 : cluster [DBG] osdmap e384: 8 total, 8 up, 8 in 2026-03-10T14:59:35.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:34 vm00 bash[28403]: cluster 2026-03-10T14:59:33.754457+0000 mon.a (mon.0) 1489 : cluster [DBG] osdmap e384: 8 total, 8 up, 8 in 2026-03-10T14:59:35.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:34 vm00 bash[20726]: cluster 2026-03-10T14:59:33.754457+0000 mon.a (mon.0) 1489 : cluster [DBG] osdmap e384: 8 total, 8 up, 8 in 2026-03-10T14:59:35.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:34 vm00 bash[20726]: cluster 2026-03-10T14:59:33.754457+0000 mon.a (mon.0) 1489 : cluster [DBG] osdmap e384: 8 total, 8 up, 8 in 2026-03-10T14:59:36.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:35 vm03 bash[23394]: cluster 2026-03-10T14:59:34.310791+0000 mgr.y (mgr.24425) 294 : cluster [DBG] pgmap v533: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 557 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:59:36.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:35 vm03 bash[23394]: cluster 2026-03-10T14:59:34.310791+0000 mgr.y (mgr.24425) 294 : cluster [DBG] pgmap v533: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 557 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:59:36.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:35 vm03 bash[23394]: cluster 2026-03-10T14:59:34.767602+0000 mon.a (mon.0) 1490 : cluster [DBG] osdmap e385: 8 total, 8 up, 8 in 2026-03-10T14:59:36.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:35 vm03 
bash[23394]: cluster 2026-03-10T14:59:34.767602+0000 mon.a (mon.0) 1490 : cluster [DBG] osdmap e385: 8 total, 8 up, 8 in 2026-03-10T14:59:36.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:35 vm03 bash[23394]: audit 2026-03-10T14:59:34.772030+0000 mon.c (mon.2) 54 : audit [INF] from='client.? 192.168.123.100:0/3531392841' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:59:36.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:35 vm03 bash[23394]: audit 2026-03-10T14:59:34.772030+0000 mon.c (mon.2) 54 : audit [INF] from='client.? 192.168.123.100:0/3531392841' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:59:36.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:35 vm03 bash[23394]: audit 2026-03-10T14:59:34.772420+0000 mon.a (mon.0) 1491 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:59:36.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:35 vm03 bash[23394]: audit 2026-03-10T14:59:34.772420+0000 mon.a (mon.0) 1491 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:59:36.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:35 vm00 bash[28403]: cluster 2026-03-10T14:59:34.310791+0000 mgr.y (mgr.24425) 294 : cluster [DBG] pgmap v533: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 557 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:59:36.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:35 vm00 bash[28403]: cluster 2026-03-10T14:59:34.310791+0000 mgr.y (mgr.24425) 294 : cluster [DBG] pgmap v533: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 557 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:59:36.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:35 vm00 bash[28403]: cluster 2026-03-10T14:59:34.767602+0000 mon.a (mon.0) 1490 : cluster [DBG] osdmap e385: 8 total, 8 up, 8 in 2026-03-10T14:59:36.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:35 vm00 bash[28403]: cluster 2026-03-10T14:59:34.767602+0000 mon.a (mon.0) 1490 : cluster [DBG] osdmap e385: 8 total, 8 up, 8 in 2026-03-10T14:59:36.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:35 vm00 bash[28403]: audit 2026-03-10T14:59:34.772030+0000 mon.c (mon.2) 54 : audit [INF] from='client.? 192.168.123.100:0/3531392841' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:59:36.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:35 vm00 bash[28403]: audit 2026-03-10T14:59:34.772030+0000 mon.c (mon.2) 54 : audit [INF] from='client.? 192.168.123.100:0/3531392841' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:59:36.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:35 vm00 bash[28403]: audit 2026-03-10T14:59:34.772420+0000 mon.a (mon.0) 1491 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:59:36.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:35 vm00 bash[28403]: audit 2026-03-10T14:59:34.772420+0000 mon.a (mon.0) 1491 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:59:36.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:35 vm00 bash[20726]: cluster 2026-03-10T14:59:34.310791+0000 mgr.y (mgr.24425) 294 : cluster [DBG] pgmap v533: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 557 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:59:36.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:35 vm00 bash[20726]: cluster 2026-03-10T14:59:34.310791+0000 mgr.y (mgr.24425) 294 : cluster [DBG] pgmap v533: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 557 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:59:36.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:35 vm00 bash[20726]: cluster 2026-03-10T14:59:34.767602+0000 mon.a (mon.0) 1490 : cluster [DBG] osdmap e385: 8 total, 8 up, 8 in 2026-03-10T14:59:36.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:35 vm00 bash[20726]: cluster 2026-03-10T14:59:34.767602+0000 mon.a (mon.0) 1490 : cluster [DBG] osdmap e385: 8 total, 8 up, 8 in 2026-03-10T14:59:36.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:35 vm00 bash[20726]: audit 2026-03-10T14:59:34.772030+0000 mon.c (mon.2) 54 : audit [INF] from='client.? 192.168.123.100:0/3531392841' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:59:36.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:35 vm00 bash[20726]: audit 2026-03-10T14:59:34.772030+0000 mon.c (mon.2) 54 : audit [INF] from='client.? 
192.168.123.100:0/3531392841' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:59:36.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:35 vm00 bash[20726]: audit 2026-03-10T14:59:34.772420+0000 mon.a (mon.0) 1491 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:59:36.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:35 vm00 bash[20726]: audit 2026-03-10T14:59:34.772420+0000 mon.a (mon.0) 1491 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-10T14:59:36.767 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoCtxSelfManagedSnaps::test PASSED [ 94%] 2026-03-10T14:59:36.800 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestCommand::test_monmap_dump PASSED [ 95%] 2026-03-10T14:59:36.814 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestCommand::test_osd_bench PASSED [ 96%] 2026-03-10T14:59:37.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:36 vm03 bash[23394]: audit 2026-03-10T14:59:35.758090+0000 mon.a (mon.0) 1492 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:59:37.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:36 vm03 bash[23394]: audit 2026-03-10T14:59:35.758090+0000 mon.a (mon.0) 1492 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:59:37.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:36 vm03 bash[23394]: cluster 2026-03-10T14:59:35.767608+0000 mon.a (mon.0) 1493 : cluster [DBG] osdmap e386: 8 total, 8 up, 8 in 2026-03-10T14:59:37.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:36 vm03 bash[23394]: cluster 2026-03-10T14:59:35.767608+0000 mon.a (mon.0) 1493 : cluster [DBG] osdmap e386: 8 total, 8 up, 8 in 2026-03-10T14:59:37.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:36 vm03 bash[23394]: cluster 2026-03-10T14:59:36.765661+0000 mon.a (mon.0) 1494 : cluster [DBG] osdmap e387: 8 total, 8 up, 8 in 2026-03-10T14:59:37.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:36 vm03 bash[23394]: cluster 2026-03-10T14:59:36.765661+0000 mon.a (mon.0) 1494 : cluster [DBG] osdmap e387: 8 total, 8 up, 8 in 2026-03-10T14:59:37.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:36 vm00 bash[28403]: audit 2026-03-10T14:59:35.758090+0000 mon.a (mon.0) 1492 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:59:37.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:36 vm00 bash[28403]: audit 2026-03-10T14:59:35.758090+0000 mon.a (mon.0) 1492 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:59:37.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:36 vm00 bash[28403]: cluster 2026-03-10T14:59:35.767608+0000 mon.a (mon.0) 1493 : cluster [DBG] osdmap e386: 8 total, 8 up, 8 in 2026-03-10T14:59:37.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:36 vm00 bash[28403]: cluster 2026-03-10T14:59:35.767608+0000 mon.a (mon.0) 1493 : cluster [DBG] osdmap e386: 8 total, 8 up, 8 in 2026-03-10T14:59:37.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:36 vm00 bash[28403]: cluster 2026-03-10T14:59:36.765661+0000 mon.a (mon.0) 1494 : cluster [DBG] osdmap e387: 8 total, 8 up, 8 in 2026-03-10T14:59:37.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:36 vm00 bash[28403]: cluster 2026-03-10T14:59:36.765661+0000 mon.a (mon.0) 1494 : cluster [DBG] osdmap e387: 8 total, 8 up, 8 in 2026-03-10T14:59:37.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:36 vm00 bash[20726]: audit 2026-03-10T14:59:35.758090+0000 mon.a (mon.0) 1492 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:59:37.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:36 vm00 bash[20726]: audit 2026-03-10T14:59:35.758090+0000 mon.a (mon.0) 1492 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-10T14:59:37.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:36 vm00 bash[20726]: cluster 2026-03-10T14:59:35.767608+0000 mon.a (mon.0) 1493 : cluster [DBG] osdmap e386: 8 total, 8 up, 8 in 2026-03-10T14:59:37.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:36 vm00 bash[20726]: cluster 2026-03-10T14:59:35.767608+0000 mon.a (mon.0) 1493 : cluster [DBG] osdmap e386: 8 total, 8 up, 8 in 2026-03-10T14:59:37.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:36 vm00 bash[20726]: cluster 2026-03-10T14:59:36.765661+0000 mon.a (mon.0) 1494 : cluster [DBG] osdmap e387: 8 total, 8 up, 8 in 2026-03-10T14:59:37.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:36 vm00 bash[20726]: cluster 2026-03-10T14:59:36.765661+0000 mon.a (mon.0) 1494 : cluster [DBG] osdmap e387: 8 total, 8 up, 8 in 2026-03-10T14:59:37.817 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestCommand::test_ceph_osd_pool_create_utf8 PASSED [ 97%] 2026-03-10T14:59:38.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:37 vm03 bash[23394]: cluster 2026-03-10T14:59:36.311110+0000 mgr.y (mgr.24425) 295 : cluster [DBG] pgmap v536: 196 pgs: 196 active+clean; 455 KiB data, 557 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-10T14:59:38.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:37 vm03 bash[23394]: cluster 2026-03-10T14:59:36.311110+0000 mgr.y (mgr.24425) 295 : cluster [DBG] pgmap v536: 196 pgs: 196 active+clean; 455 KiB data, 557 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-10T14:59:38.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:37 vm03 bash[23394]: audit 2026-03-10T14:59:36.785071+0000 mon.a (mon.0) 1495 : audit [DBG] from='client.? 
192.168.123.100:0/1557689624' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch 2026-03-10T14:59:38.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:37 vm03 bash[23394]: audit 2026-03-10T14:59:36.785071+0000 mon.a (mon.0) 1495 : audit [DBG] from='client.? 192.168.123.100:0/1557689624' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch 2026-03-10T14:59:38.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:37 vm03 bash[23394]: audit 2026-03-10T14:59:36.786343+0000 mon.a (mon.0) 1496 : audit [DBG] from='client.? 192.168.123.100:0/1557689624' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T14:59:38.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:37 vm03 bash[23394]: audit 2026-03-10T14:59:36.786343+0000 mon.a (mon.0) 1496 : audit [DBG] from='client.? 192.168.123.100:0/1557689624' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T14:59:38.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:37 vm03 bash[23394]: audit 2026-03-10T14:59:36.787492+0000 mon.a (mon.0) 1497 : audit [DBG] from='client.? 192.168.123.100:0/1557689624' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json", "epoch": 1003}]: dispatch 2026-03-10T14:59:38.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:37 vm03 bash[23394]: audit 2026-03-10T14:59:36.787492+0000 mon.a (mon.0) 1497 : audit [DBG] from='client.? 192.168.123.100:0/1557689624' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json", "epoch": 1003}]: dispatch 2026-03-10T14:59:38.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:37 vm03 bash[23394]: audit 2026-03-10T14:59:36.822800+0000 mon.b (mon.1) 74 : audit [INF] from='client.? 
192.168.123.100:0/3252859340' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 16, "pool": "\u9ec5"}]: dispatch 2026-03-10T14:59:38.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:37 vm03 bash[23394]: audit 2026-03-10T14:59:36.822800+0000 mon.b (mon.1) 74 : audit [INF] from='client.? 192.168.123.100:0/3252859340' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 16, "pool": "\u9ec5"}]: dispatch 2026-03-10T14:59:38.126 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:37 vm03 bash[23394]: audit 2026-03-10T14:59:36.826937+0000 mon.a (mon.0) 1498 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 16, "pool": "\u9ec5"}]: dispatch 2026-03-10T14:59:38.126 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:37 vm03 bash[23394]: audit 2026-03-10T14:59:36.826937+0000 mon.a (mon.0) 1498 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 16, "pool": "\u9ec5"}]: dispatch 2026-03-10T14:59:38.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:37 vm00 bash[28403]: cluster 2026-03-10T14:59:36.311110+0000 mgr.y (mgr.24425) 295 : cluster [DBG] pgmap v536: 196 pgs: 196 active+clean; 455 KiB data, 557 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-10T14:59:38.225 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:37 vm00 bash[28403]: cluster 2026-03-10T14:59:36.311110+0000 mgr.y (mgr.24425) 295 : cluster [DBG] pgmap v536: 196 pgs: 196 active+clean; 455 KiB data, 557 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-10T14:59:38.225 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:37 vm00 bash[28403]: audit 2026-03-10T14:59:36.785071+0000 mon.a (mon.0) 1495 : audit [DBG] from='client.? 
192.168.123.100:0/1557689624' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch 2026-03-10T14:59:38.225 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:37 vm00 bash[28403]: audit 2026-03-10T14:59:36.785071+0000 mon.a (mon.0) 1495 : audit [DBG] from='client.? 192.168.123.100:0/1557689624' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch 2026-03-10T14:59:38.225 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:37 vm00 bash[28403]: audit 2026-03-10T14:59:36.786343+0000 mon.a (mon.0) 1496 : audit [DBG] from='client.? 192.168.123.100:0/1557689624' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T14:59:38.225 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:37 vm00 bash[28403]: audit 2026-03-10T14:59:36.786343+0000 mon.a (mon.0) 1496 : audit [DBG] from='client.? 192.168.123.100:0/1557689624' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T14:59:38.226 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:37 vm00 bash[28403]: audit 2026-03-10T14:59:36.787492+0000 mon.a (mon.0) 1497 : audit [DBG] from='client.? 192.168.123.100:0/1557689624' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json", "epoch": 1003}]: dispatch 2026-03-10T14:59:38.226 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:37 vm00 bash[28403]: audit 2026-03-10T14:59:36.787492+0000 mon.a (mon.0) 1497 : audit [DBG] from='client.? 192.168.123.100:0/1557689624' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json", "epoch": 1003}]: dispatch 2026-03-10T14:59:38.226 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:37 vm00 bash[28403]: audit 2026-03-10T14:59:36.822800+0000 mon.b (mon.1) 74 : audit [INF] from='client.? 
192.168.123.100:0/3252859340' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 16, "pool": "\u9ec5"}]: dispatch 2026-03-10T14:59:38.226 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:37 vm00 bash[28403]: audit 2026-03-10T14:59:36.822800+0000 mon.b (mon.1) 74 : audit [INF] from='client.? 192.168.123.100:0/3252859340' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 16, "pool": "\u9ec5"}]: dispatch 2026-03-10T14:59:38.226 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:37 vm00 bash[28403]: audit 2026-03-10T14:59:36.826937+0000 mon.a (mon.0) 1498 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 16, "pool": "\u9ec5"}]: dispatch 2026-03-10T14:59:38.226 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:37 vm00 bash[28403]: audit 2026-03-10T14:59:36.826937+0000 mon.a (mon.0) 1498 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 16, "pool": "\u9ec5"}]: dispatch 2026-03-10T14:59:38.226 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:37 vm00 bash[20726]: cluster 2026-03-10T14:59:36.311110+0000 mgr.y (mgr.24425) 295 : cluster [DBG] pgmap v536: 196 pgs: 196 active+clean; 455 KiB data, 557 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-10T14:59:38.226 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:37 vm00 bash[20726]: cluster 2026-03-10T14:59:36.311110+0000 mgr.y (mgr.24425) 295 : cluster [DBG] pgmap v536: 196 pgs: 196 active+clean; 455 KiB data, 557 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-10T14:59:38.226 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:37 vm00 bash[20726]: audit 2026-03-10T14:59:36.785071+0000 mon.a (mon.0) 1495 : audit [DBG] from='client.? 
192.168.123.100:0/1557689624' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch 2026-03-10T14:59:38.226 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:37 vm00 bash[20726]: audit 2026-03-10T14:59:36.785071+0000 mon.a (mon.0) 1495 : audit [DBG] from='client.? 192.168.123.100:0/1557689624' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch 2026-03-10T14:59:38.226 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:37 vm00 bash[20726]: audit 2026-03-10T14:59:36.786343+0000 mon.a (mon.0) 1496 : audit [DBG] from='client.? 192.168.123.100:0/1557689624' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T14:59:38.226 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:37 vm00 bash[20726]: audit 2026-03-10T14:59:36.786343+0000 mon.a (mon.0) 1496 : audit [DBG] from='client.? 192.168.123.100:0/1557689624' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T14:59:38.226 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:37 vm00 bash[20726]: audit 2026-03-10T14:59:36.787492+0000 mon.a (mon.0) 1497 : audit [DBG] from='client.? 192.168.123.100:0/1557689624' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json", "epoch": 1003}]: dispatch 2026-03-10T14:59:38.226 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:37 vm00 bash[20726]: audit 2026-03-10T14:59:36.787492+0000 mon.a (mon.0) 1497 : audit [DBG] from='client.? 192.168.123.100:0/1557689624' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json", "epoch": 1003}]: dispatch 2026-03-10T14:59:38.226 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:37 vm00 bash[20726]: audit 2026-03-10T14:59:36.822800+0000 mon.b (mon.1) 74 : audit [INF] from='client.? 
192.168.123.100:0/3252859340' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 16, "pool": "\u9ec5"}]: dispatch 2026-03-10T14:59:38.226 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:37 vm00 bash[20726]: audit 2026-03-10T14:59:36.822800+0000 mon.b (mon.1) 74 : audit [INF] from='client.? 192.168.123.100:0/3252859340' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 16, "pool": "\u9ec5"}]: dispatch 2026-03-10T14:59:38.226 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:37 vm00 bash[20726]: audit 2026-03-10T14:59:36.826937+0000 mon.a (mon.0) 1498 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 16, "pool": "\u9ec5"}]: dispatch 2026-03-10T14:59:38.226 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:37 vm00 bash[20726]: audit 2026-03-10T14:59:36.826937+0000 mon.a (mon.0) 1498 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 16, "pool": "\u9ec5"}]: dispatch 2026-03-10T14:59:39.125 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 14:59:38 vm03 bash[48459]: debug there is no tcmu-runner data available 2026-03-10T14:59:39.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:38 vm03 bash[23394]: audit 2026-03-10T14:59:37.813390+0000 mon.a (mon.0) 1499 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pg_num": 16, "pool": "\u9ec5"}]': finished 2026-03-10T14:59:39.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:38 vm03 bash[23394]: audit 2026-03-10T14:59:37.813390+0000 mon.a (mon.0) 1499 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pg_num": 16, "pool": "\u9ec5"}]': finished 2026-03-10T14:59:39.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:38 vm03 bash[23394]: cluster 2026-03-10T14:59:37.834505+0000 mon.a (mon.0) 1500 : cluster [DBG] osdmap e388: 8 total, 8 up, 8 in 2026-03-10T14:59:39.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:38 vm03 bash[23394]: cluster 2026-03-10T14:59:37.834505+0000 mon.a (mon.0) 1500 : cluster [DBG] osdmap e388: 8 total, 8 up, 8 in 2026-03-10T14:59:39.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:38 vm03 bash[23394]: cluster 2026-03-10T14:59:38.311377+0000 mgr.y (mgr.24425) 296 : cluster [DBG] pgmap v539: 180 pgs: 16 unknown, 164 active+clean; 455 KiB data, 557 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:59:39.125 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:38 vm03 bash[23394]: cluster 2026-03-10T14:59:38.311377+0000 mgr.y (mgr.24425) 296 : cluster [DBG] pgmap v539: 180 pgs: 16 unknown, 164 active+clean; 455 KiB data, 557 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:59:39.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:38 vm00 bash[28403]: audit 2026-03-10T14:59:37.813390+0000 mon.a (mon.0) 1499 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pg_num": 16, "pool": "\u9ec5"}]': finished 2026-03-10T14:59:39.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:38 vm00 bash[28403]: audit 2026-03-10T14:59:37.813390+0000 mon.a (mon.0) 1499 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pg_num": 16, "pool": "\u9ec5"}]': finished 2026-03-10T14:59:39.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:38 vm00 bash[28403]: cluster 2026-03-10T14:59:37.834505+0000 mon.a (mon.0) 1500 : cluster [DBG] osdmap e388: 8 total, 8 up, 8 in 2026-03-10T14:59:39.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:38 vm00 bash[28403]: cluster 2026-03-10T14:59:37.834505+0000 mon.a (mon.0) 1500 : cluster [DBG] osdmap e388: 8 total, 8 up, 8 in 2026-03-10T14:59:39.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:38 vm00 bash[28403]: cluster 2026-03-10T14:59:38.311377+0000 mgr.y (mgr.24425) 296 : cluster [DBG] pgmap v539: 180 pgs: 16 unknown, 164 active+clean; 455 KiB data, 557 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:59:39.214 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:38 vm00 bash[28403]: cluster 2026-03-10T14:59:38.311377+0000 mgr.y (mgr.24425) 296 : cluster [DBG] pgmap v539: 180 pgs: 16 unknown, 164 active+clean; 455 KiB data, 557 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:59:39.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:38 vm00 bash[20726]: audit 2026-03-10T14:59:37.813390+0000 mon.a (mon.0) 1499 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pg_num": 16, "pool": "\u9ec5"}]': finished 2026-03-10T14:59:39.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:38 vm00 bash[20726]: audit 2026-03-10T14:59:37.813390+0000 mon.a (mon.0) 1499 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pg_num": 16, "pool": "\u9ec5"}]': finished 2026-03-10T14:59:39.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:38 vm00 bash[20726]: cluster 2026-03-10T14:59:37.834505+0000 mon.a (mon.0) 1500 : cluster [DBG] osdmap e388: 8 total, 8 up, 8 in 2026-03-10T14:59:39.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:38 vm00 bash[20726]: cluster 2026-03-10T14:59:37.834505+0000 mon.a (mon.0) 1500 : cluster [DBG] osdmap e388: 8 total, 8 up, 8 in 2026-03-10T14:59:39.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:38 vm00 bash[20726]: cluster 2026-03-10T14:59:38.311377+0000 mgr.y (mgr.24425) 296 : cluster [DBG] pgmap v539: 180 pgs: 16 unknown, 164 active+clean; 455 KiB data, 557 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:59:39.214 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:38 vm00 bash[20726]: cluster 2026-03-10T14:59:38.311377+0000 mgr.y (mgr.24425) 296 : cluster [DBG] pgmap v539: 180 pgs: 16 unknown, 164 active+clean; 455 KiB data, 557 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:59:40.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:40 vm03 bash[23394]: audit 2026-03-10T14:59:38.836394+0000 mgr.y (mgr.24425) 297 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:59:40.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:40 vm03 bash[23394]: audit 2026-03-10T14:59:38.836394+0000 mgr.y (mgr.24425) 297 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:59:40.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:40 vm03 bash[23394]: cluster 2026-03-10T14:59:38.853051+0000 mon.a (mon.0) 1501 : cluster [DBG] osdmap e389: 8 total, 8 up, 8 in 2026-03-10T14:59:40.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:40 vm03 bash[23394]: cluster 
2026-03-10T14:59:38.853051+0000 mon.a (mon.0) 1501 : cluster [DBG] osdmap e389: 8 total, 8 up, 8 in 2026-03-10T14:59:40.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:40 vm00 bash[28403]: audit 2026-03-10T14:59:38.836394+0000 mgr.y (mgr.24425) 297 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:59:40.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:40 vm00 bash[28403]: audit 2026-03-10T14:59:38.836394+0000 mgr.y (mgr.24425) 297 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:59:40.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:40 vm00 bash[28403]: cluster 2026-03-10T14:59:38.853051+0000 mon.a (mon.0) 1501 : cluster [DBG] osdmap e389: 8 total, 8 up, 8 in 2026-03-10T14:59:40.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:40 vm00 bash[28403]: cluster 2026-03-10T14:59:38.853051+0000 mon.a (mon.0) 1501 : cluster [DBG] osdmap e389: 8 total, 8 up, 8 in 2026-03-10T14:59:40.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:40 vm00 bash[20726]: audit 2026-03-10T14:59:38.836394+0000 mgr.y (mgr.24425) 297 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:59:40.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:40 vm00 bash[20726]: audit 2026-03-10T14:59:38.836394+0000 mgr.y (mgr.24425) 297 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:59:40.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:40 vm00 bash[20726]: cluster 2026-03-10T14:59:38.853051+0000 mon.a (mon.0) 1501 : cluster [DBG] osdmap e389: 8 total, 8 up, 8 in 2026-03-10T14:59:40.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:40 vm00 bash[20726]: cluster 2026-03-10T14:59:38.853051+0000 
mon.a (mon.0) 1501 : cluster [DBG] osdmap e389: 8 total, 8 up, 8 in 2026-03-10T14:59:41.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:41 vm03 bash[23394]: audit 2026-03-10T14:59:40.112680+0000 mon.a (mon.0) 1502 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:59:41.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:41 vm03 bash[23394]: audit 2026-03-10T14:59:40.112680+0000 mon.a (mon.0) 1502 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:59:41.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:41 vm03 bash[23394]: cluster 2026-03-10T14:59:40.158192+0000 mon.a (mon.0) 1503 : cluster [DBG] osdmap e390: 8 total, 8 up, 8 in 2026-03-10T14:59:41.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:41 vm03 bash[23394]: cluster 2026-03-10T14:59:40.158192+0000 mon.a (mon.0) 1503 : cluster [DBG] osdmap e390: 8 total, 8 up, 8 in 2026-03-10T14:59:41.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:41 vm03 bash[23394]: cluster 2026-03-10T14:59:40.311838+0000 mgr.y (mgr.24425) 298 : cluster [DBG] pgmap v542: 212 pgs: 212 active+clean; 455 KiB data, 558 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T14:59:41.626 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:41 vm03 bash[23394]: cluster 2026-03-10T14:59:40.311838+0000 mgr.y (mgr.24425) 298 : cluster [DBG] pgmap v542: 212 pgs: 212 active+clean; 455 KiB data, 558 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T14:59:41.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:41 vm00 bash[28403]: audit 2026-03-10T14:59:40.112680+0000 mon.a (mon.0) 1502 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:59:41.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 
14:59:41 vm00 bash[28403]: audit 2026-03-10T14:59:40.112680+0000 mon.a (mon.0) 1502 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:59:41.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:41 vm00 bash[28403]: cluster 2026-03-10T14:59:40.158192+0000 mon.a (mon.0) 1503 : cluster [DBG] osdmap e390: 8 total, 8 up, 8 in 2026-03-10T14:59:41.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:41 vm00 bash[28403]: cluster 2026-03-10T14:59:40.158192+0000 mon.a (mon.0) 1503 : cluster [DBG] osdmap e390: 8 total, 8 up, 8 in 2026-03-10T14:59:41.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:41 vm00 bash[28403]: cluster 2026-03-10T14:59:40.311838+0000 mgr.y (mgr.24425) 298 : cluster [DBG] pgmap v542: 212 pgs: 212 active+clean; 455 KiB data, 558 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T14:59:41.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:41 vm00 bash[28403]: cluster 2026-03-10T14:59:40.311838+0000 mgr.y (mgr.24425) 298 : cluster [DBG] pgmap v542: 212 pgs: 212 active+clean; 455 KiB data, 558 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T14:59:41.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:41 vm00 bash[20726]: audit 2026-03-10T14:59:40.112680+0000 mon.a (mon.0) 1502 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:59:41.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:41 vm00 bash[20726]: audit 2026-03-10T14:59:40.112680+0000 mon.a (mon.0) 1502 : audit [DBG] from='mgr.24425 192.168.123.100:0/471932685' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:59:41.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:41 vm00 bash[20726]: cluster 2026-03-10T14:59:40.158192+0000 mon.a (mon.0) 1503 : cluster [DBG] osdmap e390: 8 total, 8 up, 8 in 
2026-03-10T14:59:41.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:41 vm00 bash[20726]: cluster 2026-03-10T14:59:40.158192+0000 mon.a (mon.0) 1503 : cluster [DBG] osdmap e390: 8 total, 8 up, 8 in 2026-03-10T14:59:41.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:41 vm00 bash[20726]: cluster 2026-03-10T14:59:40.311838+0000 mgr.y (mgr.24425) 298 : cluster [DBG] pgmap v542: 212 pgs: 212 active+clean; 455 KiB data, 558 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T14:59:41.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:41 vm00 bash[20726]: cluster 2026-03-10T14:59:40.311838+0000 mgr.y (mgr.24425) 298 : cluster [DBG] pgmap v542: 212 pgs: 212 active+clean; 455 KiB data, 558 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T14:59:42.255 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestWatchNotify::test PASSED [ 98%] 2026-03-10T14:59:42.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:42 vm03 bash[23394]: cluster 2026-03-10T14:59:41.234352+0000 mon.a (mon.0) 1504 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:59:42.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:42 vm03 bash[23394]: cluster 2026-03-10T14:59:41.234352+0000 mon.a (mon.0) 1504 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:59:42.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:42 vm03 bash[23394]: cluster 2026-03-10T14:59:41.246190+0000 mon.a (mon.0) 1505 : cluster [DBG] osdmap e391: 8 total, 8 up, 8 in 2026-03-10T14:59:42.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:42 vm03 bash[23394]: cluster 2026-03-10T14:59:41.246190+0000 mon.a (mon.0) 1505 : cluster [DBG] osdmap e391: 8 total, 8 up, 8 in 2026-03-10T14:59:42.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:42 vm00 bash[28403]: cluster 
2026-03-10T14:59:41.234352+0000 mon.a (mon.0) 1504 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:59:42.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:42 vm00 bash[28403]: cluster 2026-03-10T14:59:41.234352+0000 mon.a (mon.0) 1504 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:59:42.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:42 vm00 bash[28403]: cluster 2026-03-10T14:59:41.246190+0000 mon.a (mon.0) 1505 : cluster [DBG] osdmap e391: 8 total, 8 up, 8 in 2026-03-10T14:59:42.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:42 vm00 bash[28403]: cluster 2026-03-10T14:59:41.246190+0000 mon.a (mon.0) 1505 : cluster [DBG] osdmap e391: 8 total, 8 up, 8 in 2026-03-10T14:59:42.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:42 vm00 bash[20726]: cluster 2026-03-10T14:59:41.234352+0000 mon.a (mon.0) 1504 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:59:42.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:42 vm00 bash[20726]: cluster 2026-03-10T14:59:41.234352+0000 mon.a (mon.0) 1504 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:59:42.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:42 vm00 bash[20726]: cluster 2026-03-10T14:59:41.246190+0000 mon.a (mon.0) 1505 : cluster [DBG] osdmap e391: 8 total, 8 up, 8 in 2026-03-10T14:59:42.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:42 vm00 bash[20726]: cluster 2026-03-10T14:59:41.246190+0000 mon.a (mon.0) 1505 : cluster [DBG] osdmap e391: 8 total, 8 up, 8 in 2026-03-10T14:59:43.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:43 vm03 bash[23394]: cluster 2026-03-10T14:59:42.250339+0000 mon.a (mon.0) 1506 : cluster [DBG] osdmap e392: 8 total, 8 up, 8 in 2026-03-10T14:59:43.625 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:43 vm03 bash[23394]: cluster 2026-03-10T14:59:42.250339+0000 mon.a (mon.0) 1506 : cluster [DBG] osdmap e392: 8 total, 8 up, 8 in 2026-03-10T14:59:43.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:43 vm03 bash[23394]: cluster 2026-03-10T14:59:42.312218+0000 mgr.y (mgr.24425) 299 : cluster [DBG] pgmap v545: 180 pgs: 180 active+clean; 455 KiB data, 558 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:59:43.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:43 vm03 bash[23394]: cluster 2026-03-10T14:59:42.312218+0000 mgr.y (mgr.24425) 299 : cluster [DBG] pgmap v545: 180 pgs: 180 active+clean; 455 KiB data, 558 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:59:43.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:43 vm00 bash[28403]: cluster 2026-03-10T14:59:42.250339+0000 mon.a (mon.0) 1506 : cluster [DBG] osdmap e392: 8 total, 8 up, 8 in 2026-03-10T14:59:43.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:43 vm00 bash[28403]: cluster 2026-03-10T14:59:42.250339+0000 mon.a (mon.0) 1506 : cluster [DBG] osdmap e392: 8 total, 8 up, 8 in 2026-03-10T14:59:43.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:43 vm00 bash[28403]: cluster 2026-03-10T14:59:42.312218+0000 mgr.y (mgr.24425) 299 : cluster [DBG] pgmap v545: 180 pgs: 180 active+clean; 455 KiB data, 558 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:59:43.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:43 vm00 bash[28403]: cluster 2026-03-10T14:59:42.312218+0000 mgr.y (mgr.24425) 299 : cluster [DBG] pgmap v545: 180 pgs: 180 active+clean; 455 KiB data, 558 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:59:43.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:43 vm00 bash[20726]: cluster 2026-03-10T14:59:42.250339+0000 mon.a (mon.0) 1506 : cluster [DBG] osdmap e392: 8 total, 8 up, 8 in 2026-03-10T14:59:43.714 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:43 vm00 bash[20726]: cluster 2026-03-10T14:59:42.250339+0000 mon.a (mon.0) 1506 : cluster [DBG] osdmap e392: 8 total, 8 up, 8 in 2026-03-10T14:59:43.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:43 vm00 bash[20726]: cluster 2026-03-10T14:59:42.312218+0000 mgr.y (mgr.24425) 299 : cluster [DBG] pgmap v545: 180 pgs: 180 active+clean; 455 KiB data, 558 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:59:43.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:43 vm00 bash[20726]: cluster 2026-03-10T14:59:42.312218+0000 mgr.y (mgr.24425) 299 : cluster [DBG] pgmap v545: 180 pgs: 180 active+clean; 455 KiB data, 558 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:59:44.214 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:59:43 vm00 bash[21005]: ::ffff:192.168.123.103 - - [10/Mar/2026:14:59:43] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:59:44.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:44 vm03 bash[23394]: cluster 2026-03-10T14:59:43.286245+0000 mon.a (mon.0) 1507 : cluster [DBG] osdmap e393: 8 total, 8 up, 8 in 2026-03-10T14:59:44.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:44 vm03 bash[23394]: cluster 2026-03-10T14:59:43.286245+0000 mon.a (mon.0) 1507 : cluster [DBG] osdmap e393: 8 total, 8 up, 8 in 2026-03-10T14:59:44.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:44 vm00 bash[20726]: cluster 2026-03-10T14:59:43.286245+0000 mon.a (mon.0) 1507 : cluster [DBG] osdmap e393: 8 total, 8 up, 8 in 2026-03-10T14:59:44.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:44 vm00 bash[20726]: cluster 2026-03-10T14:59:43.286245+0000 mon.a (mon.0) 1507 : cluster [DBG] osdmap e393: 8 total, 8 up, 8 in 2026-03-10T14:59:44.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:44 vm00 bash[28403]: cluster 2026-03-10T14:59:43.286245+0000 mon.a (mon.0) 1507 : cluster [DBG] osdmap e393: 8 total, 8 up, 8 in 
2026-03-10T14:59:44.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:44 vm00 bash[28403]: cluster 2026-03-10T14:59:43.286245+0000 mon.a (mon.0) 1507 : cluster [DBG] osdmap e393: 8 total, 8 up, 8 in 2026-03-10T14:59:45.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:45 vm03 bash[23394]: cluster 2026-03-10T14:59:44.294403+0000 mon.a (mon.0) 1508 : cluster [DBG] osdmap e394: 8 total, 8 up, 8 in 2026-03-10T14:59:45.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:45 vm03 bash[23394]: cluster 2026-03-10T14:59:44.294403+0000 mon.a (mon.0) 1508 : cluster [DBG] osdmap e394: 8 total, 8 up, 8 in 2026-03-10T14:59:45.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:45 vm03 bash[23394]: cluster 2026-03-10T14:59:44.312532+0000 mgr.y (mgr.24425) 300 : cluster [DBG] pgmap v548: 212 pgs: 32 unknown, 180 active+clean; 455 KiB data, 558 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:59:45.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:45 vm03 bash[23394]: cluster 2026-03-10T14:59:44.312532+0000 mgr.y (mgr.24425) 300 : cluster [DBG] pgmap v548: 212 pgs: 32 unknown, 180 active+clean; 455 KiB data, 558 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:59:45.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:45 vm03 bash[23394]: cluster 2026-03-10T14:59:45.286100+0000 mon.a (mon.0) 1509 : cluster [DBG] osdmap e395: 8 total, 8 up, 8 in 2026-03-10T14:59:45.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:45 vm03 bash[23394]: cluster 2026-03-10T14:59:45.286100+0000 mon.a (mon.0) 1509 : cluster [DBG] osdmap e395: 8 total, 8 up, 8 in 2026-03-10T14:59:45.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:45 vm00 bash[28403]: cluster 2026-03-10T14:59:44.294403+0000 mon.a (mon.0) 1508 : cluster [DBG] osdmap e394: 8 total, 8 up, 8 in 2026-03-10T14:59:45.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:45 vm00 bash[28403]: cluster 2026-03-10T14:59:44.294403+0000 mon.a (mon.0) 1508 : cluster [DBG] osdmap e394: 8 total, 8 up, 8 in 
2026-03-10T14:59:45.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:45 vm00 bash[28403]: cluster 2026-03-10T14:59:44.312532+0000 mgr.y (mgr.24425) 300 : cluster [DBG] pgmap v548: 212 pgs: 32 unknown, 180 active+clean; 455 KiB data, 558 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:59:45.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:45 vm00 bash[28403]: cluster 2026-03-10T14:59:44.312532+0000 mgr.y (mgr.24425) 300 : cluster [DBG] pgmap v548: 212 pgs: 32 unknown, 180 active+clean; 455 KiB data, 558 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:59:45.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:45 vm00 bash[28403]: cluster 2026-03-10T14:59:45.286100+0000 mon.a (mon.0) 1509 : cluster [DBG] osdmap e395: 8 total, 8 up, 8 in 2026-03-10T14:59:45.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:45 vm00 bash[28403]: cluster 2026-03-10T14:59:45.286100+0000 mon.a (mon.0) 1509 : cluster [DBG] osdmap e395: 8 total, 8 up, 8 in 2026-03-10T14:59:45.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:45 vm00 bash[20726]: cluster 2026-03-10T14:59:44.294403+0000 mon.a (mon.0) 1508 : cluster [DBG] osdmap e394: 8 total, 8 up, 8 in 2026-03-10T14:59:45.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:45 vm00 bash[20726]: cluster 2026-03-10T14:59:44.294403+0000 mon.a (mon.0) 1508 : cluster [DBG] osdmap e394: 8 total, 8 up, 8 in 2026-03-10T14:59:45.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:45 vm00 bash[20726]: cluster 2026-03-10T14:59:44.312532+0000 mgr.y (mgr.24425) 300 : cluster [DBG] pgmap v548: 212 pgs: 32 unknown, 180 active+clean; 455 KiB data, 558 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:59:45.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:45 vm00 bash[20726]: cluster 2026-03-10T14:59:44.312532+0000 mgr.y (mgr.24425) 300 : cluster [DBG] pgmap v548: 212 pgs: 32 unknown, 180 active+clean; 455 KiB data, 558 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:59:45.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 
10 14:59:45 vm00 bash[20726]: cluster 2026-03-10T14:59:45.286100+0000 mon.a (mon.0) 1509 : cluster [DBG] osdmap e395: 8 total, 8 up, 8 in 2026-03-10T14:59:45.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:45 vm00 bash[20726]: cluster 2026-03-10T14:59:45.286100+0000 mon.a (mon.0) 1509 : cluster [DBG] osdmap e395: 8 total, 8 up, 8 in 2026-03-10T14:59:46.336 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestWatchNotify::test_aio_notify PASSED [100%] 2026-03-10T14:59:46.336 INFO:tasks.workunit.client.0.vm00.stdout: 2026-03-10T14:59:46.336 INFO:tasks.workunit.client.0.vm00.stdout:=============================== warnings summary =============================== 2026-03-10T14:59:46.336 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py:210 2026-03-10T14:59:46.336 INFO:tasks.workunit.client.0.vm00.stdout: /home/ubuntu/cephtest/clone.client.0/src/test/pybind/test_rados.py:210: DeprecationWarning: invalid escape sequence '\-' 2026-03-10T14:59:46.336 INFO:tasks.workunit.client.0.vm00.stdout: assert re.match('[0-9a-f\-]{36}', fsid, re.I) 2026-03-10T14:59:46.337 INFO:tasks.workunit.client.0.vm00.stdout: 2026-03-10T14:59:46.337 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py:960 2026-03-10T14:59:46.337 INFO:tasks.workunit.client.0.vm00.stdout: /home/ubuntu/cephtest/clone.client.0/src/test/pybind/test_rados.py:960: PytestUnknownMarkWarning: Unknown pytest.mark.wait - is this a typo? 
You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/mark.html 2026-03-10T14:59:46.337 INFO:tasks.workunit.client.0.vm00.stdout: @pytest.mark.wait 2026-03-10T14:59:46.337 INFO:tasks.workunit.client.0.vm00.stdout: 2026-03-10T14:59:46.337 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py:996 2026-03-10T14:59:46.337 INFO:tasks.workunit.client.0.vm00.stdout: /home/ubuntu/cephtest/clone.client.0/src/test/pybind/test_rados.py:996: PytestUnknownMarkWarning: Unknown pytest.mark.wait - is this a typo? You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/mark.html 2026-03-10T14:59:46.337 INFO:tasks.workunit.client.0.vm00.stdout: @pytest.mark.wait 2026-03-10T14:59:46.337 INFO:tasks.workunit.client.0.vm00.stdout: 2026-03-10T14:59:46.337 INFO:tasks.workunit.client.0.vm00.stdout:../../../clone.client.0/src/test/pybind/test_rados.py:1024 2026-03-10T14:59:46.337 INFO:tasks.workunit.client.0.vm00.stdout: /home/ubuntu/cephtest/clone.client.0/src/test/pybind/test_rados.py:1024: PytestUnknownMarkWarning: Unknown pytest.mark.wait - is this a typo? 
You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/mark.html 2026-03-10T14:59:46.337 INFO:tasks.workunit.client.0.vm00.stdout: @pytest.mark.wait 2026-03-10T14:59:46.337 INFO:tasks.workunit.client.0.vm00.stdout: 2026-03-10T14:59:46.337 INFO:tasks.workunit.client.0.vm00.stdout::210 2026-03-10T14:59:46.337 INFO:tasks.workunit.client.0.vm00.stdout::210 2026-03-10T14:59:46.337 INFO:tasks.workunit.client.0.vm00.stdout::210 2026-03-10T14:59:46.337 INFO:tasks.workunit.client.0.vm00.stdout::210 2026-03-10T14:59:46.337 INFO:tasks.workunit.client.0.vm00.stdout::210 2026-03-10T14:59:46.337 INFO:tasks.workunit.client.0.vm00.stdout::210 2026-03-10T14:59:46.337 INFO:tasks.workunit.client.0.vm00.stdout::210 2026-03-10T14:59:46.337 INFO:tasks.workunit.client.0.vm00.stdout::210 2026-03-10T14:59:46.337 INFO:tasks.workunit.client.0.vm00.stdout::210 2026-03-10T14:59:46.337 INFO:tasks.workunit.client.0.vm00.stdout: :210: DeprecationWarning: invalid escape sequence '\-' 2026-03-10T14:59:46.337 INFO:tasks.workunit.client.0.vm00.stdout: 2026-03-10T14:59:46.337 INFO:tasks.workunit.client.0.vm00.stdout:-- Docs: https://docs.pytest.org/en/stable/warnings.html 2026-03-10T14:59:46.337 INFO:tasks.workunit.client.0.vm00.stdout:================= 91 passed, 13 warnings in 338.72s (0:05:38) ================== 2026-03-10T14:59:46.361 INFO:tasks.workunit.client.0.vm00.stderr:+ exit 0 2026-03-10T14:59:46.361 INFO:teuthology.orchestra.run:Running command with timeout 3600 2026-03-10T14:59:46.361 DEBUG:teuthology.orchestra.run.vm00:> sudo rm -rf -- /home/ubuntu/cephtest/mnt.0/client.0/tmp 2026-03-10T14:59:46.369 INFO:tasks.workunit:Stopping ['rados/test_python.sh'] on client.0... 
2026-03-10T14:59:46.369 DEBUG:teuthology.orchestra.run.vm00:> sudo rm -rf -- /home/ubuntu/cephtest/workunits.list.client.0 /home/ubuntu/cephtest/clone.client.0 2026-03-10T14:59:46.855 DEBUG:teuthology.parallel:result is None 2026-03-10T14:59:46.855 DEBUG:teuthology.orchestra.run.vm00:> sudo rm -rf -- /home/ubuntu/cephtest/mnt.0/client.0 2026-03-10T14:59:46.864 INFO:tasks.workunit:Deleted dir /home/ubuntu/cephtest/mnt.0/client.0 2026-03-10T14:59:46.865 DEBUG:teuthology.orchestra.run.vm00:> rmdir -- /home/ubuntu/cephtest/mnt.0 2026-03-10T14:59:46.910 INFO:tasks.workunit:Deleted artificial mount point /home/ubuntu/cephtest/mnt.0/client.0 2026-03-10T14:59:46.910 DEBUG:teuthology.run_tasks:Unwinding manager cephadm 2026-03-10T14:59:46.912 INFO:tasks.cephadm:Teardown begin 2026-03-10T14:59:46.913 DEBUG:teuthology.orchestra.run.vm00:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-10T14:59:46.958 DEBUG:teuthology.orchestra.run.vm03:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-10T14:59:46.966 INFO:tasks.cephadm:Disabling cephadm mgr module 2026-03-10T14:59:46.966 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 -- ceph mgr module disable cephadm 2026-03-10T14:59:47.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:47 vm03 bash[23394]: cluster 2026-03-10T14:59:46.312966+0000 mgr.y (mgr.24425) 301 : cluster [DBG] pgmap v550: 212 pgs: 212 active+clean; 455 KiB data, 558 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 251 B/s wr, 1 op/s 2026-03-10T14:59:47.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:47 vm03 bash[23394]: cluster 2026-03-10T14:59:46.312966+0000 mgr.y (mgr.24425) 301 : cluster [DBG] pgmap v550: 212 pgs: 212 active+clean; 455 KiB data, 558 MiB used, 159 GiB / 160 GiB avail; 1.2 
KiB/s rd, 251 B/s wr, 1 op/s 2026-03-10T14:59:47.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:47 vm03 bash[23394]: cluster 2026-03-10T14:59:46.337963+0000 mon.a (mon.0) 1510 : cluster [DBG] osdmap e396: 8 total, 8 up, 8 in 2026-03-10T14:59:47.625 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:47 vm03 bash[23394]: cluster 2026-03-10T14:59:46.337963+0000 mon.a (mon.0) 1510 : cluster [DBG] osdmap e396: 8 total, 8 up, 8 in 2026-03-10T14:59:47.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:47 vm00 bash[28403]: cluster 2026-03-10T14:59:46.312966+0000 mgr.y (mgr.24425) 301 : cluster [DBG] pgmap v550: 212 pgs: 212 active+clean; 455 KiB data, 558 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 251 B/s wr, 1 op/s 2026-03-10T14:59:47.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:47 vm00 bash[28403]: cluster 2026-03-10T14:59:46.312966+0000 mgr.y (mgr.24425) 301 : cluster [DBG] pgmap v550: 212 pgs: 212 active+clean; 455 KiB data, 558 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 251 B/s wr, 1 op/s 2026-03-10T14:59:47.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:47 vm00 bash[28403]: cluster 2026-03-10T14:59:46.337963+0000 mon.a (mon.0) 1510 : cluster [DBG] osdmap e396: 8 total, 8 up, 8 in 2026-03-10T14:59:47.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:47 vm00 bash[28403]: cluster 2026-03-10T14:59:46.337963+0000 mon.a (mon.0) 1510 : cluster [DBG] osdmap e396: 8 total, 8 up, 8 in 2026-03-10T14:59:47.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:47 vm00 bash[20726]: cluster 2026-03-10T14:59:46.312966+0000 mgr.y (mgr.24425) 301 : cluster [DBG] pgmap v550: 212 pgs: 212 active+clean; 455 KiB data, 558 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 251 B/s wr, 1 op/s 2026-03-10T14:59:47.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:47 vm00 bash[20726]: cluster 2026-03-10T14:59:46.312966+0000 mgr.y (mgr.24425) 301 : cluster [DBG] pgmap v550: 212 pgs: 212 active+clean; 455 KiB data, 558 MiB used, 159 GiB 
/ 160 GiB avail; 1.2 KiB/s rd, 251 B/s wr, 1 op/s 2026-03-10T14:59:47.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:47 vm00 bash[20726]: cluster 2026-03-10T14:59:46.337963+0000 mon.a (mon.0) 1510 : cluster [DBG] osdmap e396: 8 total, 8 up, 8 in 2026-03-10T14:59:47.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:47 vm00 bash[20726]: cluster 2026-03-10T14:59:46.337963+0000 mon.a (mon.0) 1510 : cluster [DBG] osdmap e396: 8 total, 8 up, 8 in 2026-03-10T14:59:48.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:48 vm00 bash[20726]: cluster 2026-03-10T14:59:48.341465+0000 mon.a (mon.0) 1511 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:59:48.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:48 vm00 bash[20726]: cluster 2026-03-10T14:59:48.341465+0000 mon.a (mon.0) 1511 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:59:48.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:48 vm00 bash[28403]: cluster 2026-03-10T14:59:48.341465+0000 mon.a (mon.0) 1511 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:59:48.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:48 vm00 bash[28403]: cluster 2026-03-10T14:59:48.341465+0000 mon.a (mon.0) 1511 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:59:48.843 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:48 vm03 bash[23394]: cluster 2026-03-10T14:59:48.341465+0000 mon.a (mon.0) 1511 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:59:48.843 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:48 vm03 bash[23394]: cluster 2026-03-10T14:59:48.341465+0000 mon.a (mon.0) 1511 : cluster [WRN] Health check update: 2 pool(s) do not have an application 
enabled (POOL_APP_NOT_ENABLED)
2026-03-10T14:59:49.125 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 14:59:48 vm03 bash[48459]: debug there is no tcmu-runner data available
2026-03-10T14:59:49.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:49 vm00 bash[20726]: cluster 2026-03-10T14:59:48.313275+0000 mgr.y (mgr.24425) 302 : cluster [DBG] pgmap v552: 180 pgs: 180 active+clean; 455 KiB data, 558 MiB used, 159 GiB / 160 GiB avail; 1020 B/s rd, 0 op/s
2026-03-10T14:59:49.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:49 vm00 bash[20726]: cluster 2026-03-10T14:59:48.313275+0000 mgr.y (mgr.24425) 302 : cluster [DBG] pgmap v552: 180 pgs: 180 active+clean; 455 KiB data, 558 MiB used, 159 GiB / 160 GiB avail; 1020 B/s rd, 0 op/s
2026-03-10T14:59:49.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:49 vm00 bash[28403]: cluster 2026-03-10T14:59:48.313275+0000 mgr.y (mgr.24425) 302 : cluster [DBG] pgmap v552: 180 pgs: 180 active+clean; 455 KiB data, 558 MiB used, 159 GiB / 160 GiB avail; 1020 B/s rd, 0 op/s
2026-03-10T14:59:49.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:49 vm00 bash[28403]: cluster 2026-03-10T14:59:48.313275+0000 mgr.y (mgr.24425) 302 : cluster [DBG] pgmap v552: 180 pgs: 180 active+clean; 455 KiB data, 558 MiB used, 159 GiB / 160 GiB avail; 1020 B/s rd, 0 op/s
2026-03-10T14:59:49.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:49 vm03 bash[23394]: cluster 2026-03-10T14:59:48.313275+0000 mgr.y (mgr.24425) 302 : cluster [DBG] pgmap v552: 180 pgs: 180 active+clean; 455 KiB data, 558 MiB used, 159 GiB / 160 GiB avail; 1020 B/s rd, 0 op/s
2026-03-10T14:59:49.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:49 vm03 bash[23394]: cluster 2026-03-10T14:59:48.313275+0000 mgr.y (mgr.24425) 302 : cluster [DBG] pgmap v552: 180 pgs: 180 active+clean; 455 KiB data, 558 MiB used, 159 GiB / 160 GiB avail; 1020 B/s rd, 0 op/s
2026-03-10T14:59:50.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:50 vm00 bash[28403]: audit 2026-03-10T14:59:48.846603+0000 mgr.y (mgr.24425) 303 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:59:50.714 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:50 vm00 bash[28403]: audit 2026-03-10T14:59:48.846603+0000 mgr.y (mgr.24425) 303 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:59:50.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:50 vm00 bash[20726]: audit 2026-03-10T14:59:48.846603+0000 mgr.y (mgr.24425) 303 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:59:50.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:50 vm00 bash[20726]: audit 2026-03-10T14:59:48.846603+0000 mgr.y (mgr.24425) 303 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:59:50.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:50 vm03 bash[23394]: audit 2026-03-10T14:59:48.846603+0000 mgr.y (mgr.24425) 303 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:59:50.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:50 vm03 bash[23394]: audit 2026-03-10T14:59:48.846603+0000 mgr.y (mgr.24425) 303 : audit [DBG] from='client.14514 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:59:51.632 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/mon.c/config
2026-03-10T14:59:51.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:51 vm00 bash[20726]: cluster 2026-03-10T14:59:50.313700+0000 mgr.y (mgr.24425) 304 : cluster [DBG] pgmap v553: 180 pgs: 180 active+clean; 455 KiB data, 558 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-10T14:59:51.715 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:51 vm00 bash[20726]: cluster 2026-03-10T14:59:50.313700+0000 mgr.y (mgr.24425) 304 : cluster [DBG] pgmap v553: 180 pgs: 180 active+clean; 455 KiB data, 558 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-10T14:59:51.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:51 vm00 bash[28403]: cluster 2026-03-10T14:59:50.313700+0000 mgr.y (mgr.24425) 304 : cluster [DBG] pgmap v553: 180 pgs: 180 active+clean; 455 KiB data, 558 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-10T14:59:51.715 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:51 vm00 bash[28403]: cluster 2026-03-10T14:59:50.313700+0000 mgr.y (mgr.24425) 304 : cluster [DBG] pgmap v553: 180 pgs: 180 active+clean; 455 KiB data, 558 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-10T14:59:51.841 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T14:59:51.838+0000 7f5782e34640 -1 auth: error reading file: /etc/ceph/ceph.keyring: bufferlist::read_file(/etc/ceph/ceph.keyring): read error:(21) Is a directory
2026-03-10T14:59:51.841 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T14:59:51.838+0000 7f5782e34640 -1 auth: failed to load /etc/ceph/ceph.keyring: (21) Is a directory
2026-03-10T14:59:51.841 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T14:59:51.838+0000 7f5782e34640 -1 auth: error reading file: /etc/ceph/ceph.keyring: bufferlist::read_file(/etc/ceph/ceph.keyring): read error:(21) Is a directory
2026-03-10T14:59:51.841 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T14:59:51.838+0000 7f5782e34640 -1 auth: failed to load /etc/ceph/ceph.keyring: (21) Is a directory
2026-03-10T14:59:51.841 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T14:59:51.838+0000 7f5782e34640 -1 auth: error reading file: /etc/ceph/ceph.keyring: bufferlist::read_file(/etc/ceph/ceph.keyring): read error:(21) Is a directory
2026-03-10T14:59:51.841 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T14:59:51.838+0000 7f5782e34640 -1 auth: failed to load /etc/ceph/ceph.keyring: (21) Is a directory
2026-03-10T14:59:51.841 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T14:59:51.838+0000 7f5782e34640 -1 monclient: keyring not found
2026-03-10T14:59:51.841 INFO:teuthology.orchestra.run.vm00.stderr:[errno 21] error connecting to the cluster
2026-03-10T14:59:51.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:51 vm03 bash[23394]: cluster 2026-03-10T14:59:50.313700+0000 mgr.y (mgr.24425) 304 : cluster [DBG] pgmap v553: 180 pgs: 180 active+clean; 455 KiB data, 558 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-10T14:59:51.875 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:51 vm03 bash[23394]: cluster 2026-03-10T14:59:50.313700+0000 mgr.y (mgr.24425) 304 : cluster [DBG] pgmap v553: 180 pgs: 180 active+clean; 455 KiB data, 558 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-10T14:59:51.887 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T14:59:51.888 INFO:tasks.cephadm:Cleaning up testdir ceph.* files...
2026-03-10T14:59:51.888 DEBUG:teuthology.orchestra.run.vm00:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub
2026-03-10T14:59:51.890 DEBUG:teuthology.orchestra.run.vm03:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub
2026-03-10T14:59:51.893 INFO:tasks.cephadm:Stopping all daemons...
2026-03-10T14:59:51.893 INFO:tasks.cephadm.mon.a:Stopping mon.a...
2026-03-10T14:59:51.893 DEBUG:teuthology.orchestra.run.vm00:> sudo systemctl stop ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@mon.a
2026-03-10T14:59:52.003 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:51 vm00 systemd[1]: Stopping Ceph mon.a for 93bd26bc-1c8f-11f1-8404-610ce866bde7...
2026-03-10T14:59:52.003 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:51 vm00 bash[20726]: debug 2026-03-10T14:59:51.986+0000 7f1f69a2a640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.a -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0
2026-03-10T14:59:52.003 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:51 vm00 bash[20726]: debug 2026-03-10T14:59:51.986+0000 7f1f69a2a640 -1 mon.a@0(leader) e3 *** Got Signal Terminated ***
2026-03-10T14:59:52.204 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:59:52 vm00 bash[21005]: [10/Mar/2026:14:59:52] ENGINE Bus STOPPING
2026-03-10T14:59:52.267 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 14:59:52 vm00 bash[61369]: ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7-mon-a
2026-03-10T14:59:52.268 DEBUG:teuthology.orchestra.run.vm00:> sudo pkill -f 'journalctl -f -n 0 -u ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@mon.a.service'
2026-03-10T14:59:52.281 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-10T14:59:52.281 INFO:tasks.cephadm.mon.a:Stopped mon.a
2026-03-10T14:59:52.282 INFO:tasks.cephadm.mon.b:Stopping mon.c...
2026-03-10T14:59:52.282 DEBUG:teuthology.orchestra.run.vm00:> sudo systemctl stop ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@mon.c
2026-03-10T14:59:52.643 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:52 vm00 systemd[1]: Stopping Ceph mon.c for 93bd26bc-1c8f-11f1-8404-610ce866bde7...
2026-03-10T14:59:52.643 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:52 vm00 bash[28403]: debug 2026-03-10T14:59:52.402+0000 7fdbc2f13640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.c -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0
2026-03-10T14:59:52.643 INFO:journalctl@ceph.mon.c.vm00.stdout:Mar 10 14:59:52 vm00 bash[28403]: debug 2026-03-10T14:59:52.402+0000 7fdbc2f13640 -1 mon.c@2(peon) e3 *** Got Signal Terminated ***
2026-03-10T14:59:52.643 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:59:52 vm00 bash[21005]: [10/Mar/2026:14:59:52] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
2026-03-10T14:59:52.643 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:59:52 vm00 bash[21005]: [10/Mar/2026:14:59:52] ENGINE Bus STOPPED
2026-03-10T14:59:52.643 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:59:52 vm00 bash[21005]: [10/Mar/2026:14:59:52] ENGINE Bus STARTING
2026-03-10T14:59:52.643 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:59:52 vm00 bash[21005]: [10/Mar/2026:14:59:52] ENGINE Serving on http://:::9283
2026-03-10T14:59:52.643 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:59:52 vm00 bash[21005]: [10/Mar/2026:14:59:52] ENGINE Bus STARTED
2026-03-10T14:59:52.643 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:59:52 vm00 bash[21005]: [10/Mar/2026:14:59:52] ENGINE Bus STOPPING
2026-03-10T14:59:52.720 DEBUG:teuthology.orchestra.run.vm00:> sudo pkill -f 'journalctl -f -n 0 -u ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@mon.c.service'
2026-03-10T14:59:52.732 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-10T14:59:52.732 INFO:tasks.cephadm.mon.b:Stopped mon.c
2026-03-10T14:59:52.732 INFO:tasks.cephadm.mon.b:Stopping mon.b...
2026-03-10T14:59:52.732 DEBUG:teuthology.orchestra.run.vm03:> sudo systemctl stop ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@mon.b
2026-03-10T14:59:52.964 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:59:52 vm00 bash[21005]: [10/Mar/2026:14:59:52] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
2026-03-10T14:59:52.964 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:59:52 vm00 bash[21005]: [10/Mar/2026:14:59:52] ENGINE Bus STOPPED
2026-03-10T14:59:52.964 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:59:52 vm00 bash[21005]: [10/Mar/2026:14:59:52] ENGINE Bus STARTING
2026-03-10T14:59:53.052 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:52 vm03 systemd[1]: Stopping Ceph mon.b for 93bd26bc-1c8f-11f1-8404-610ce866bde7...
2026-03-10T14:59:53.052 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:52 vm03 bash[23394]: debug 2026-03-10T14:59:52.801+0000 7f77dc5b4640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.b -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0
2026-03-10T14:59:53.052 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 14:59:52 vm03 bash[23394]: debug 2026-03-10T14:59:52.801+0000 7f77dc5b4640 -1 mon.b@1(peon) e3 *** Got Signal Terminated ***
2026-03-10T14:59:53.106 DEBUG:teuthology.orchestra.run.vm03:> sudo pkill -f 'journalctl -f -n 0 -u ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@mon.b.service'
2026-03-10T14:59:53.120 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-10T14:59:53.120 INFO:tasks.cephadm.mon.b:Stopped mon.b
2026-03-10T14:59:53.120 INFO:tasks.cephadm.mgr.y:Stopping mgr.y...
2026-03-10T14:59:53.120 DEBUG:teuthology.orchestra.run.vm00:> sudo systemctl stop ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@mgr.y
2026-03-10T14:59:53.251 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:59:52 vm00 bash[21005]: [10/Mar/2026:14:59:52] ENGINE Serving on http://:::9283
2026-03-10T14:59:53.251 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:59:52 vm00 bash[21005]: [10/Mar/2026:14:59:52] ENGINE Bus STARTED
2026-03-10T14:59:53.251 INFO:journalctl@ceph.mgr.y.vm00.stdout:Mar 10 14:59:53 vm00 systemd[1]: Stopping Ceph mgr.y for 93bd26bc-1c8f-11f1-8404-610ce866bde7...
2026-03-10T14:59:53.304 DEBUG:teuthology.orchestra.run.vm00:> sudo pkill -f 'journalctl -f -n 0 -u ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@mgr.y.service'
2026-03-10T14:59:53.315 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-10T14:59:53.315 INFO:tasks.cephadm.mgr.y:Stopped mgr.y
2026-03-10T14:59:53.315 INFO:tasks.cephadm.mgr.x:Stopping mgr.x...
2026-03-10T14:59:53.315 DEBUG:teuthology.orchestra.run.vm03:> sudo systemctl stop ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@mgr.x
2026-03-10T14:59:53.375 INFO:journalctl@ceph.mgr.x.vm03.stdout:Mar 10 14:59:53 vm03 systemd[1]: Stopping Ceph mgr.x for 93bd26bc-1c8f-11f1-8404-610ce866bde7...
2026-03-10T14:59:53.470 DEBUG:teuthology.orchestra.run.vm03:> sudo pkill -f 'journalctl -f -n 0 -u ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@mgr.x.service'
2026-03-10T14:59:53.484 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-10T14:59:53.484 INFO:tasks.cephadm.mgr.x:Stopped mgr.x
2026-03-10T14:59:53.484 INFO:tasks.cephadm.osd.0:Stopping osd.0...
2026-03-10T14:59:53.484 DEBUG:teuthology.orchestra.run.vm00:> sudo systemctl stop ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@osd.0
2026-03-10T14:59:53.711 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 10 14:59:53 vm00 systemd[1]: Stopping Ceph osd.0 for 93bd26bc-1c8f-11f1-8404-610ce866bde7...
2026-03-10T14:59:53.964 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 10 14:59:53 vm00 bash[31304]: debug 2026-03-10T14:59:53.710+0000 7f51c2f5e640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0
2026-03-10T14:59:53.964 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 10 14:59:53 vm00 bash[31304]: debug 2026-03-10T14:59:53.710+0000 7f51c2f5e640 -1 osd.0 396 *** Got signal Terminated ***
2026-03-10T14:59:53.964 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 10 14:59:53 vm00 bash[31304]: debug 2026-03-10T14:59:53.710+0000 7f51c2f5e640 -1 osd.0 396 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-10T14:59:54.625 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 14:59:54 vm03 bash[51311]: ts=2026-03-10T14:59:54.331Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=nvmeof msg="Unable to refresh target groups" err="Get \"http://192.168.123.100:8765/sd/prometheus/sd-config?service=nvmeof\": dial tcp 192.168.123.100:8765: connect: connection refused"
2026-03-10T14:59:54.625 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 14:59:54 vm03 bash[51311]: ts=2026-03-10T14:59:54.331Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=ceph-exporter msg="Unable to refresh target groups" err="Get \"http://192.168.123.100:8765/sd/prometheus/sd-config?service=ceph-exporter\": dial tcp 192.168.123.100:8765: connect: connection refused"
2026-03-10T14:59:54.625 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 14:59:54 vm03 bash[51311]: ts=2026-03-10T14:59:54.332Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=node msg="Unable to refresh target groups" err="Get \"http://192.168.123.100:8765/sd/prometheus/sd-config?service=node-exporter\": dial tcp 192.168.123.100:8765: connect: connection refused"
2026-03-10T14:59:54.625 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 14:59:54 vm03 bash[51311]: ts=2026-03-10T14:59:54.332Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=ceph msg="Unable to refresh target groups" err="Get \"http://192.168.123.100:8765/sd/prometheus/sd-config?service=mgr-prometheus\": dial tcp 192.168.123.100:8765: connect: connection refused"
2026-03-10T14:59:54.625 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 14:59:54 vm03 bash[51311]: ts=2026-03-10T14:59:54.332Z caller=refresh.go:90 level=error component="discovery manager notify" discovery=http config=config-0 msg="Unable to refresh target groups" err="Get \"http://192.168.123.100:8765/sd/prometheus/sd-config?service=alertmanager\": dial tcp 192.168.123.100:8765: connect: connection refused"
2026-03-10T14:59:54.625 INFO:journalctl@ceph.prometheus.a.vm03.stdout:Mar 10 14:59:54 vm03 bash[51311]: ts=2026-03-10T14:59:54.332Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=nfs msg="Unable to refresh target groups" err="Get \"http://192.168.123.100:8765/sd/prometheus/sd-config?service=nfs\": dial tcp 192.168.123.100:8765: connect: connection refused"
2026-03-10T14:59:59.096 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 10 14:59:58 vm00 bash[61651]: ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7-osd-0
2026-03-10T14:59:59.129 DEBUG:teuthology.orchestra.run.vm00:> sudo pkill -f 'journalctl -f -n 0 -u ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@osd.0.service'
2026-03-10T14:59:59.140 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-10T14:59:59.140 INFO:tasks.cephadm.osd.0:Stopped osd.0
2026-03-10T14:59:59.140 INFO:tasks.cephadm.osd.1:Stopping osd.1...
2026-03-10T14:59:59.140 DEBUG:teuthology.orchestra.run.vm00:> sudo systemctl stop ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@osd.1
2026-03-10T14:59:59.464 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 10 14:59:59 vm00 systemd[1]: Stopping Ceph osd.1 for 93bd26bc-1c8f-11f1-8404-610ce866bde7...
2026-03-10T14:59:59.464 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 10 14:59:59 vm00 bash[37070]: debug 2026-03-10T14:59:59.234+0000 7fe4a583a640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.1 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0
2026-03-10T14:59:59.464 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 10 14:59:59 vm00 bash[37070]: debug 2026-03-10T14:59:59.234+0000 7fe4a583a640 -1 osd.1 396 *** Got signal Terminated ***
2026-03-10T14:59:59.464 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 10 14:59:59 vm00 bash[37070]: debug 2026-03-10T14:59:59.234+0000 7fe4a583a640 -1 osd.1 396 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-10T15:00:04.583 INFO:journalctl@ceph.osd.1.vm00.stdout:Mar 10 15:00:04 vm00 bash[61839]: ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7-osd-1
2026-03-10T15:00:04.629 DEBUG:teuthology.orchestra.run.vm00:> sudo pkill -f 'journalctl -f -n 0 -u ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@osd.1.service'
2026-03-10T15:00:04.641 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-10T15:00:04.642 INFO:tasks.cephadm.osd.1:Stopped osd.1
2026-03-10T15:00:04.642 INFO:tasks.cephadm.osd.2:Stopping osd.2...
2026-03-10T15:00:04.642 DEBUG:teuthology.orchestra.run.vm00:> sudo systemctl stop ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@osd.2
2026-03-10T15:00:04.964 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 10 15:00:04 vm00 systemd[1]: Stopping Ceph osd.2 for 93bd26bc-1c8f-11f1-8404-610ce866bde7...
2026-03-10T15:00:04.964 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 10 15:00:04 vm00 bash[43300]: debug 2026-03-10T15:00:04.730+0000 7ff5f953c640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.2 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0
2026-03-10T15:00:04.964 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 10 15:00:04 vm00 bash[43300]: debug 2026-03-10T15:00:04.730+0000 7ff5f953c640 -1 osd.2 396 *** Got signal Terminated ***
2026-03-10T15:00:04.964 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 10 15:00:04 vm00 bash[43300]: debug 2026-03-10T15:00:04.730+0000 7ff5f953c640 -1 osd.2 396 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-10T15:00:10.105 INFO:journalctl@ceph.osd.2.vm00.stdout:Mar 10 15:00:09 vm00 bash[62025]: ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7-osd-2
2026-03-10T15:00:10.237 DEBUG:teuthology.orchestra.run.vm00:> sudo pkill -f 'journalctl -f -n 0 -u ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@osd.2.service'
2026-03-10T15:00:10.260 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-10T15:00:10.260 INFO:tasks.cephadm.osd.2:Stopped osd.2
2026-03-10T15:00:10.260 INFO:tasks.cephadm.osd.3:Stopping osd.3...
2026-03-10T15:00:10.260 DEBUG:teuthology.orchestra.run.vm00:> sudo systemctl stop ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@osd.3
2026-03-10T15:00:10.397 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 10 15:00:10 vm00 systemd[1]: Stopping Ceph osd.3 for 93bd26bc-1c8f-11f1-8404-610ce866bde7...
2026-03-10T15:00:10.714 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 10 15:00:10 vm00 bash[49185]: debug 2026-03-10T15:00:10.394+0000 7ff7d09b1640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.3 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0
2026-03-10T15:00:10.714 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 10 15:00:10 vm00 bash[49185]: debug 2026-03-10T15:00:10.394+0000 7ff7d09b1640 -1 osd.3 396 *** Got signal Terminated ***
2026-03-10T15:00:10.714 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 10 15:00:10 vm00 bash[49185]: debug 2026-03-10T15:00:10.394+0000 7ff7d09b1640 -1 osd.3 396 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-10T15:00:15.714 INFO:journalctl@ceph.osd.3.vm00.stdout:Mar 10 15:00:15 vm00 bash[62200]: ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7-osd-3
2026-03-10T15:00:15.800 DEBUG:teuthology.orchestra.run.vm00:> sudo pkill -f 'journalctl -f -n 0 -u ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@osd.3.service'
2026-03-10T15:00:15.811 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-10T15:00:15.811 INFO:tasks.cephadm.osd.3:Stopped osd.3
2026-03-10T15:00:15.811 INFO:tasks.cephadm.osd.4:Stopping osd.4...
2026-03-10T15:00:15.811 DEBUG:teuthology.orchestra.run.vm03:> sudo systemctl stop ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@osd.4
2026-03-10T15:00:16.125 INFO:journalctl@ceph.osd.4.vm03.stdout:Mar 10 15:00:15 vm03 systemd[1]: Stopping Ceph osd.4 for 93bd26bc-1c8f-11f1-8404-610ce866bde7...
2026-03-10T15:00:16.125 INFO:journalctl@ceph.osd.4.vm03.stdout:Mar 10 15:00:15 vm03 bash[26650]: debug 2026-03-10T15:00:15.857+0000 7f7714266640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.4 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0
2026-03-10T15:00:16.125 INFO:journalctl@ceph.osd.4.vm03.stdout:Mar 10 15:00:15 vm03 bash[26650]: debug 2026-03-10T15:00:15.857+0000 7f7714266640 -1 osd.4 396 *** Got signal Terminated ***
2026-03-10T15:00:16.125 INFO:journalctl@ceph.osd.4.vm03.stdout:Mar 10 15:00:15 vm03 bash[26650]: debug 2026-03-10T15:00:15.857+0000 7f7714266640 -1 osd.4 396 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-10T15:00:20.128 INFO:journalctl@ceph.osd.5.vm03.stdout:Mar 10 15:00:19 vm03 bash[32416]: debug 2026-03-10T15:00:19.637+0000 7f3f4e890640 -1 osd.5 396 heartbeat_check: no reply from 192.168.123.100:6803 osd.0 since back 2026-03-10T14:59:55.689115+0000 front 2026-03-10T14:59:55.689251+0000 (oldest deadline 2026-03-10T15:00:19.188818+0000)
2026-03-10T15:00:20.625 INFO:journalctl@ceph.osd.5.vm03.stdout:Mar 10 15:00:20 vm03 bash[32416]: debug 2026-03-10T15:00:20.593+0000 7f3f4e890640 -1 osd.5 396 heartbeat_check: no reply from 192.168.123.100:6803 osd.0 since back 2026-03-10T14:59:55.689115+0000 front 2026-03-10T14:59:55.689251+0000 (oldest deadline 2026-03-10T15:00:19.188818+0000)
2026-03-10T15:00:20.625 INFO:journalctl@ceph.osd.6.vm03.stdout:Mar 10 15:00:20 vm03 bash[38461]: debug 2026-03-10T15:00:20.369+0000 7f480c404640 -1 osd.6 396 heartbeat_check: no reply from 192.168.123.100:6803 osd.0 since back 2026-03-10T14:59:57.018612+0000 front 2026-03-10T14:59:57.018288+0000 (oldest deadline 2026-03-10T15:00:19.918047+0000)
2026-03-10T15:00:21.159 INFO:journalctl@ceph.osd.4.vm03.stdout:Mar 10 15:00:21 vm03 bash[52314]: ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7-osd-4
2026-03-10T15:00:21.373 DEBUG:teuthology.orchestra.run.vm03:> sudo pkill -f 'journalctl -f -n 0 -u ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@osd.4.service'
2026-03-10T15:00:21.384 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-10T15:00:21.384 INFO:tasks.cephadm.osd.4:Stopped osd.4
2026-03-10T15:00:21.384 INFO:tasks.cephadm.osd.5:Stopping osd.5...
2026-03-10T15:00:21.384 DEBUG:teuthology.orchestra.run.vm03:> sudo systemctl stop ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@osd.5
2026-03-10T15:00:21.431 INFO:journalctl@ceph.osd.6.vm03.stdout:Mar 10 15:00:21 vm03 bash[38461]: debug 2026-03-10T15:00:21.353+0000 7f480c404640 -1 osd.6 396 heartbeat_check: no reply from 192.168.123.100:6803 osd.0 since back 2026-03-10T14:59:57.018612+0000 front 2026-03-10T14:59:57.018288+0000 (oldest deadline 2026-03-10T15:00:19.918047+0000)
2026-03-10T15:00:21.625 INFO:journalctl@ceph.osd.5.vm03.stdout:Mar 10 15:00:21 vm03 systemd[1]: Stopping Ceph osd.5 for 93bd26bc-1c8f-11f1-8404-610ce866bde7...
2026-03-10T15:00:21.625 INFO:journalctl@ceph.osd.5.vm03.stdout:Mar 10 15:00:21 vm03 bash[32416]: debug 2026-03-10T15:00:21.473+0000 7f3f52a78640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.5 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0
2026-03-10T15:00:21.625 INFO:journalctl@ceph.osd.5.vm03.stdout:Mar 10 15:00:21 vm03 bash[32416]: debug 2026-03-10T15:00:21.473+0000 7f3f52a78640 -1 osd.5 396 *** Got signal Terminated ***
2026-03-10T15:00:21.625 INFO:journalctl@ceph.osd.5.vm03.stdout:Mar 10 15:00:21 vm03 bash[32416]: debug 2026-03-10T15:00:21.473+0000 7f3f52a78640 -1 osd.5 396 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-10T15:00:21.625 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 15:00:21 vm03 bash[44271]: debug 2026-03-10T15:00:21.429+0000 7fd60c93c640 -1 osd.7 396 heartbeat_check: no reply from 192.168.123.100:6803 osd.0 since back 2026-03-10T14:59:58.096871+0000 front 2026-03-10T14:59:58.096858+0000 (oldest deadline 2026-03-10T15:00:20.996441+0000)
2026-03-10T15:00:21.875 INFO:journalctl@ceph.osd.5.vm03.stdout:Mar 10 15:00:21 vm03 bash[32416]: debug 2026-03-10T15:00:21.621+0000 7f3f4e890640 -1 osd.5 396 heartbeat_check: no reply from 192.168.123.100:6803 osd.0 since back 2026-03-10T14:59:55.689115+0000 front 2026-03-10T14:59:55.689251+0000 (oldest deadline 2026-03-10T15:00:19.188818+0000)
2026-03-10T15:00:22.595 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 15:00:22 vm03 bash[44271]: debug 2026-03-10T15:00:22.393+0000 7fd60c93c640 -1 osd.7 396 heartbeat_check: no reply from 192.168.123.100:6803 osd.0 since back 2026-03-10T14:59:58.096871+0000 front 2026-03-10T14:59:58.096858+0000 (oldest deadline 2026-03-10T15:00:20.996441+0000)
2026-03-10T15:00:22.595 INFO:journalctl@ceph.osd.6.vm03.stdout:Mar 10 15:00:22 vm03 bash[38461]: debug 2026-03-10T15:00:22.337+0000 7f480c404640 -1 osd.6 396 heartbeat_check: no reply from 192.168.123.100:6803 osd.0 since back 2026-03-10T14:59:57.018612+0000 front 2026-03-10T14:59:57.018288+0000 (oldest deadline 2026-03-10T15:00:19.918047+0000)
2026-03-10T15:00:22.875 INFO:journalctl@ceph.osd.5.vm03.stdout:Mar 10 15:00:22 vm03 bash[32416]: debug 2026-03-10T15:00:22.593+0000 7f3f4e890640 -1 osd.5 396 heartbeat_check: no reply from 192.168.123.100:6803 osd.0 since back 2026-03-10T14:59:55.689115+0000 front 2026-03-10T14:59:55.689251+0000 (oldest deadline 2026-03-10T15:00:19.188818+0000)
2026-03-10T15:00:23.563 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 15:00:23 vm03 bash[44271]: debug 2026-03-10T15:00:23.397+0000 7fd60c93c640 -1 osd.7 396 heartbeat_check: no reply from 192.168.123.100:6803 osd.0 since back 2026-03-10T14:59:58.096871+0000 front 2026-03-10T14:59:58.096858+0000 (oldest deadline 2026-03-10T15:00:20.996441+0000)
2026-03-10T15:00:23.563 INFO:journalctl@ceph.osd.6.vm03.stdout:Mar 10 15:00:23 vm03 bash[38461]: debug 2026-03-10T15:00:23.289+0000 7f480c404640 -1 osd.6 396 heartbeat_check: no reply from 192.168.123.100:6803 osd.0 since back 2026-03-10T14:59:57.018612+0000 front 2026-03-10T14:59:57.018288+0000 (oldest deadline 2026-03-10T15:00:19.918047+0000)
2026-03-10T15:00:23.875 INFO:journalctl@ceph.osd.5.vm03.stdout:Mar 10 15:00:23 vm03 bash[32416]: debug 2026-03-10T15:00:23.557+0000 7f3f4e890640 -1 osd.5 396 heartbeat_check: no reply from 192.168.123.100:6803 osd.0 since back 2026-03-10T14:59:55.689115+0000 front 2026-03-10T14:59:55.689251+0000 (oldest deadline 2026-03-10T15:00:19.188818+0000)
2026-03-10T15:00:24.554 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 15:00:24 vm03 bash[44271]: debug 2026-03-10T15:00:24.353+0000 7fd60c93c640 -1 osd.7 396 heartbeat_check: no reply from 192.168.123.100:6803 osd.0 since back 2026-03-10T14:59:58.096871+0000 front 2026-03-10T14:59:58.096858+0000 (oldest deadline 2026-03-10T15:00:20.996441+0000)
2026-03-10T15:00:24.554 INFO:journalctl@ceph.osd.6.vm03.stdout:Mar 10 15:00:24 vm03 bash[38461]: debug 2026-03-10T15:00:24.241+0000 7f480c404640 -1 osd.6 396 heartbeat_check: no reply from 192.168.123.100:6803 osd.0 since back 2026-03-10T14:59:57.018612+0000 front 2026-03-10T14:59:57.018288+0000 (oldest deadline 2026-03-10T15:00:19.918047+0000)
2026-03-10T15:00:24.875 INFO:journalctl@ceph.osd.5.vm03.stdout:Mar 10 15:00:24 vm03 bash[32416]: debug 2026-03-10T15:00:24.549+0000 7f3f4e890640 -1 osd.5 396 heartbeat_check: no reply from 192.168.123.100:6803 osd.0 since back 2026-03-10T14:59:55.689115+0000 front 2026-03-10T14:59:55.689251+0000 (oldest deadline 2026-03-10T15:00:19.188818+0000)
2026-03-10T15:00:25.625 INFO:journalctl@ceph.osd.5.vm03.stdout:Mar 10 15:00:25 vm03 bash[32416]: debug 2026-03-10T15:00:25.525+0000 7f3f4e890640 -1 osd.5 396 heartbeat_check: no reply from 192.168.123.100:6803 osd.0 since back 2026-03-10T14:59:55.689115+0000 front 2026-03-10T14:59:55.689251+0000 (oldest deadline 2026-03-10T15:00:19.188818+0000)
2026-03-10T15:00:25.625 INFO:journalctl@ceph.osd.5.vm03.stdout:Mar 10 15:00:25 vm03 bash[32416]: debug 2026-03-10T15:00:25.525+0000 7f3f4e890640 -1 osd.5 396 heartbeat_check: no reply from 192.168.123.100:6807 osd.1 since back 2026-03-10T15:00:03.289638+0000 front 2026-03-10T15:00:03.289697+0000 (oldest deadline 2026-03-10T15:00:24.989341+0000)
2026-03-10T15:00:25.625 INFO:journalctl@ceph.osd.6.vm03.stdout:Mar 10 15:00:25 vm03 bash[38461]: debug 2026-03-10T15:00:25.277+0000 7f480c404640 -1 osd.6 396 heartbeat_check: no reply from 192.168.123.100:6803 osd.0 since back 2026-03-10T14:59:57.018612+0000 front 2026-03-10T14:59:57.018288+0000 (oldest deadline 2026-03-10T15:00:19.918047+0000)
2026-03-10T15:00:25.625 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 15:00:25 vm03 bash[44271]: debug 2026-03-10T15:00:25.397+0000 7fd60c93c640 -1 osd.7 396 heartbeat_check: no reply from 192.168.123.100:6803 osd.0 since back 2026-03-10T14:59:58.096871+0000 front 2026-03-10T14:59:58.096858+0000 (oldest deadline 2026-03-10T15:00:20.996441+0000)
2026-03-10T15:00:25.625 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 15:00:25 vm03 bash[44271]: debug 2026-03-10T15:00:25.397+0000 7fd60c93c640 -1 osd.7 396 heartbeat_check: no reply from 192.168.123.100:6807 osd.1 since back 2026-03-10T15:00:01.497611+0000 front 2026-03-10T15:00:01.497596+0000 (oldest deadline 2026-03-10T15:00:24.996966+0000)
2026-03-10T15:00:26.517 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 15:00:26 vm03 bash[44271]: debug 2026-03-10T15:00:26.377+0000 7fd60c93c640 -1 osd.7 396 heartbeat_check: no reply from 192.168.123.100:6803 osd.0 since back 2026-03-10T14:59:58.096871+0000 front 2026-03-10T14:59:58.096858+0000 (oldest deadline 2026-03-10T15:00:20.996441+0000)
2026-03-10T15:00:26.517 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 15:00:26 vm03 bash[44271]: debug 2026-03-10T15:00:26.377+0000 7fd60c93c640 -1 osd.7 396 heartbeat_check: no reply from 192.168.123.100:6807 osd.1 since back 2026-03-10T15:00:01.497611+0000 front 2026-03-10T15:00:01.497596+0000 (oldest deadline 2026-03-10T15:00:24.996966+0000)
2026-03-10T15:00:26.517 INFO:journalctl@ceph.osd.6.vm03.stdout:Mar 10 15:00:26 vm03 bash[38461]: debug 2026-03-10T15:00:26.245+0000 7f480c404640 -1 osd.6 396 heartbeat_check: no reply from 192.168.123.100:6803 osd.0 since back 2026-03-10T14:59:57.018612+0000 front 2026-03-10T14:59:57.018288+0000 (oldest deadline 2026-03-10T15:00:19.918047+0000)
2026-03-10T15:00:26.875 INFO:journalctl@ceph.osd.5.vm03.stdout:Mar 10 15:00:26 vm03 bash[52500]: ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7-osd-5
2026-03-10T15:00:27.076 DEBUG:teuthology.orchestra.run.vm03:> sudo pkill -f 'journalctl -f -n 0 -u ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@osd.5.service'
2026-03-10T15:00:27.135 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-10T15:00:27.135 INFO:tasks.cephadm.osd.5:Stopped osd.5
2026-03-10T15:00:27.135 INFO:tasks.cephadm.osd.6:Stopping osd.6...
2026-03-10T15:00:27.135 DEBUG:teuthology.orchestra.run.vm03:> sudo systemctl stop ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@osd.6
2026-03-10T15:00:27.337 INFO:journalctl@ceph.osd.6.vm03.stdout:Mar 10 15:00:27 vm03 systemd[1]: Stopping Ceph osd.6 for 93bd26bc-1c8f-11f1-8404-610ce866bde7...
2026-03-10T15:00:27.337 INFO:journalctl@ceph.osd.6.vm03.stdout:Mar 10 15:00:27 vm03 bash[38461]: debug 2026-03-10T15:00:27.225+0000 7f48105ec640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.6 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0
2026-03-10T15:00:27.337 INFO:journalctl@ceph.osd.6.vm03.stdout:Mar 10 15:00:27 vm03 bash[38461]: debug 2026-03-10T15:00:27.225+0000 7f48105ec640 -1 osd.6 396 *** Got signal Terminated ***
2026-03-10T15:00:27.337 INFO:journalctl@ceph.osd.6.vm03.stdout:Mar 10 15:00:27 vm03 bash[38461]: debug 2026-03-10T15:00:27.225+0000 7f48105ec640 -1 osd.6 396 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-10T15:00:27.337 INFO:journalctl@ceph.osd.6.vm03.stdout:Mar 10 15:00:27 vm03 bash[38461]: debug 2026-03-10T15:00:27.285+0000 7f480c404640 -1 osd.6 396 heartbeat_check: no reply from 192.168.123.100:6803 osd.0 since back 2026-03-10T14:59:57.018612+0000 front 2026-03-10T14:59:57.018288+0000 (oldest deadline 2026-03-10T15:00:19.918047+0000)
2026-03-10T15:00:27.337 INFO:journalctl@ceph.osd.6.vm03.stdout:Mar 10 15:00:27 vm03 bash[38461]: debug 2026-03-10T15:00:27.285+0000 7f480c404640 -1 osd.6 396 heartbeat_check: no reply from 192.168.123.100:6807 osd.1 since back 2026-03-10T15:00:02.119124+0000 front 2026-03-10T15:00:02.119078+0000 (oldest deadline 2026-03-10T15:00:26.818738+0000)
2026-03-10T15:00:27.544 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 15:00:27 vm03 bash[44271]: debug 2026-03-10T15:00:27.333+0000 7fd60c93c640 -1 osd.7 396 heartbeat_check: no reply from 192.168.123.100:6803 osd.0 since back 2026-03-10T14:59:58.096871+0000 front 2026-03-10T14:59:58.096858+0000 (oldest deadline 2026-03-10T15:00:20.996441+0000)
2026-03-10T15:00:27.545 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 15:00:27 vm03 bash[44271]: debug 2026-03-10T15:00:27.333+0000 7fd60c93c640 -1 osd.7 396 heartbeat_check: no reply from 192.168.123.100:6807 osd.1 since back 2026-03-10T15:00:01.497611+0000 front 2026-03-10T15:00:01.497596+0000 (oldest deadline 2026-03-10T15:00:24.996966+0000)
2026-03-10T15:00:28.625 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 15:00:28 vm03 bash[44271]: debug 2026-03-10T15:00:28.377+0000 7fd60c93c640 -1 osd.7 396 heartbeat_check: no reply from 192.168.123.100:6803 osd.0 since back 2026-03-10T14:59:58.096871+0000 front 2026-03-10T14:59:58.096858+0000 (oldest deadline 2026-03-10T15:00:20.996441+0000)
2026-03-10T15:00:28.625 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 15:00:28 vm03 bash[44271]: debug 2026-03-10T15:00:28.377+0000 7fd60c93c640 -1 osd.7 396 heartbeat_check: no reply from 192.168.123.100:6807 osd.1 since back 2026-03-10T15:00:01.497611+0000 front 2026-03-10T15:00:01.497596+0000 (oldest deadline 2026-03-10T15:00:24.996966+0000)
2026-03-10T15:00:28.625 INFO:journalctl@ceph.osd.6.vm03.stdout:Mar 10 15:00:28 vm03 bash[38461]: debug 2026-03-10T15:00:28.249+0000 7f480c404640 -1 osd.6 396 heartbeat_check: no reply from 192.168.123.100:6803 osd.0 since back 2026-03-10T14:59:57.018612+0000 front 2026-03-10T14:59:57.018288+0000 (oldest deadline 2026-03-10T15:00:19.918047+0000)
2026-03-10T15:00:28.625 INFO:journalctl@ceph.osd.6.vm03.stdout:Mar 10 15:00:28 vm03 bash[38461]: debug 2026-03-10T15:00:28.249+0000 7f480c404640 -1 osd.6 396 heartbeat_check: no reply from 192.168.123.100:6807 osd.1 since back 2026-03-10T15:00:02.119124+0000 front 2026-03-10T15:00:02.119078+0000 (oldest deadline 2026-03-10T15:00:26.818738+0000)
2026-03-10T15:00:29.625 INFO:journalctl@ceph.osd.6.vm03.stdout:Mar 10 15:00:29 vm03 bash[38461]: debug 2026-03-10T15:00:29.285+0000 7f480c404640 -1 osd.6 396 heartbeat_check: no reply from 192.168.123.100:6803 osd.0 since back 2026-03-10T14:59:57.018612+0000 front 2026-03-10T14:59:57.018288+0000 (oldest deadline 2026-03-10T15:00:19.918047+0000)
2026-03-10T15:00:29.625 INFO:journalctl@ceph.osd.6.vm03.stdout:Mar 10 15:00:29 vm03 bash[38461]: debug 2026-03-10T15:00:29.285+0000 7f480c404640 -1 osd.6 396 heartbeat_check: no reply from 192.168.123.100:6807 osd.1 since back 2026-03-10T15:00:02.119124+0000 front 2026-03-10T15:00:02.119078+0000 (oldest deadline 2026-03-10T15:00:26.818738+0000)
2026-03-10T15:00:29.625 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 15:00:29 vm03 bash[44271]: debug 2026-03-10T15:00:29.421+0000 7fd60c93c640 -1 osd.7 396 heartbeat_check: no reply from 192.168.123.100:6803 osd.0 since back 2026-03-10T14:59:58.096871+0000 front 2026-03-10T14:59:58.096858+0000 (oldest deadline 2026-03-10T15:00:20.996441+0000)
2026-03-10T15:00:29.625 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 15:00:29 vm03 bash[44271]: debug 2026-03-10T15:00:29.421+0000 7fd60c93c640 -1 osd.7 396 heartbeat_check: no reply from 192.168.123.100:6807 osd.1 since back 2026-03-10T15:00:01.497611+0000 front 2026-03-10T15:00:01.497596+0000 (oldest deadline 2026-03-10T15:00:24.996966+0000)
2026-03-10T15:00:30.625 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 15:00:30 vm03 bash[44271]: debug 2026-03-10T15:00:30.457+0000 7fd60c93c640 -1 osd.7 396 heartbeat_check: no reply from 192.168.123.100:6803 osd.0 since back 2026-03-10T14:59:58.096871+0000 front 2026-03-10T14:59:58.096858+0000 (oldest deadline 2026-03-10T15:00:20.996441+0000)
2026-03-10T15:00:30.625 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 15:00:30 vm03 bash[44271]: debug 2026-03-10T15:00:30.457+0000 7fd60c93c640 -1 osd.7 396 heartbeat_check: no reply from 192.168.123.100:6807 osd.1 since back 2026-03-10T15:00:01.497611+0000 front 2026-03-10T15:00:01.497596+0000 (oldest deadline 2026-03-10T15:00:24.996966+0000)
2026-03-10T15:00:30.625 INFO:journalctl@ceph.osd.6.vm03.stdout:Mar 10 15:00:30 vm03 bash[38461]: debug 2026-03-10T15:00:30.321+0000 7f480c404640 -1 osd.6 396 heartbeat_check: no reply from 192.168.123.100:6803 osd.0 since back 2026-03-10T14:59:57.018612+0000 front 2026-03-10T14:59:57.018288+0000 (oldest deadline 2026-03-10T15:00:19.918047+0000)
2026-03-10T15:00:30.625 INFO:journalctl@ceph.osd.6.vm03.stdout:Mar 10 15:00:30 vm03 bash[38461]: debug 2026-03-10T15:00:30.321+0000 7f480c404640 -1 osd.6 396 heartbeat_check: no reply from 192.168.123.100:6807 osd.1 since back 2026-03-10T15:00:02.119124+0000 front 2026-03-10T15:00:02.119078+0000 (oldest deadline 2026-03-10T15:00:26.818738+0000) 2026-03-10T15:00:31.625 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 15:00:31 vm03 bash[44271]: debug 2026-03-10T15:00:31.453+0000 7fd60c93c640 -1 osd.7 396 heartbeat_check: no reply from 192.168.123.100:6803 osd.0 since back 2026-03-10T14:59:58.096871+0000 front 2026-03-10T14:59:58.096858+0000 (oldest deadline 2026-03-10T15:00:20.996441+0000) 2026-03-10T15:00:31.625 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 15:00:31 vm03 bash[44271]: debug 2026-03-10T15:00:31.453+0000 7fd60c93c640 -1 osd.7 396 heartbeat_check: no reply from 192.168.123.100:6807 osd.1 since back 2026-03-10T15:00:01.497611+0000 front 2026-03-10T15:00:01.497596+0000 (oldest deadline 2026-03-10T15:00:24.996966+0000) 2026-03-10T15:00:31.625 INFO:journalctl@ceph.osd.6.vm03.stdout:Mar 10 15:00:31 vm03 bash[38461]: debug 2026-03-10T15:00:31.273+0000 7f480c404640 -1 osd.6 396 heartbeat_check: no reply from 192.168.123.100:6803 osd.0 since back 2026-03-10T14:59:57.018612+0000 front 2026-03-10T14:59:57.018288+0000 (oldest deadline 2026-03-10T15:00:19.918047+0000) 2026-03-10T15:00:31.625 INFO:journalctl@ceph.osd.6.vm03.stdout:Mar 10 15:00:31 vm03 bash[38461]: debug 2026-03-10T15:00:31.273+0000 7f480c404640 -1 osd.6 396 heartbeat_check: no reply from 192.168.123.100:6807 osd.1 since back 2026-03-10T15:00:02.119124+0000 front 2026-03-10T15:00:02.119078+0000 (oldest deadline 2026-03-10T15:00:26.818738+0000) 2026-03-10T15:00:32.529 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 15:00:32 vm03 bash[44271]: debug 2026-03-10T15:00:32.457+0000 7fd60c93c640 -1 osd.7 396 heartbeat_check: no reply from 192.168.123.100:6803 osd.0 since back 2026-03-10T14:59:58.096871+0000 front 
2026-03-10T14:59:58.096858+0000 (oldest deadline 2026-03-10T15:00:20.996441+0000) 2026-03-10T15:00:32.529 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 15:00:32 vm03 bash[44271]: debug 2026-03-10T15:00:32.457+0000 7fd60c93c640 -1 osd.7 396 heartbeat_check: no reply from 192.168.123.100:6807 osd.1 since back 2026-03-10T15:00:01.497611+0000 front 2026-03-10T15:00:01.497596+0000 (oldest deadline 2026-03-10T15:00:24.996966+0000) 2026-03-10T15:00:32.529 INFO:journalctl@ceph.osd.6.vm03.stdout:Mar 10 15:00:32 vm03 bash[52686]: ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7-osd-6 2026-03-10T15:00:32.787 DEBUG:teuthology.orchestra.run.vm03:> sudo pkill -f 'journalctl -f -n 0 -u ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@osd.6.service' 2026-03-10T15:00:32.798 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T15:00:32.798 INFO:tasks.cephadm.osd.6:Stopped osd.6 2026-03-10T15:00:32.798 INFO:tasks.cephadm.osd.7:Stopping osd.7... 2026-03-10T15:00:32.798 DEBUG:teuthology.orchestra.run.vm03:> sudo systemctl stop ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@osd.7 2026-03-10T15:00:33.125 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 15:00:32 vm03 systemd[1]: Stopping Ceph osd.7 for 93bd26bc-1c8f-11f1-8404-610ce866bde7... 
2026-03-10T15:00:33.125 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 15:00:32 vm03 bash[44271]: debug 2026-03-10T15:00:32.885+0000 7fd610b24640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.7 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-10T15:00:33.125 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 15:00:32 vm03 bash[44271]: debug 2026-03-10T15:00:32.885+0000 7fd610b24640 -1 osd.7 396 *** Got signal Terminated *** 2026-03-10T15:00:33.125 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 15:00:32 vm03 bash[44271]: debug 2026-03-10T15:00:32.885+0000 7fd610b24640 -1 osd.7 396 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-10T15:00:33.875 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 15:00:33 vm03 bash[44271]: debug 2026-03-10T15:00:33.449+0000 7fd60c93c640 -1 osd.7 396 heartbeat_check: no reply from 192.168.123.100:6803 osd.0 since back 2026-03-10T14:59:58.096871+0000 front 2026-03-10T14:59:58.096858+0000 (oldest deadline 2026-03-10T15:00:20.996441+0000) 2026-03-10T15:00:33.875 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 15:00:33 vm03 bash[44271]: debug 2026-03-10T15:00:33.449+0000 7fd60c93c640 -1 osd.7 396 heartbeat_check: no reply from 192.168.123.100:6807 osd.1 since back 2026-03-10T15:00:01.497611+0000 front 2026-03-10T15:00:01.497596+0000 (oldest deadline 2026-03-10T15:00:24.996966+0000) 2026-03-10T15:00:34.875 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 15:00:34 vm03 bash[44271]: debug 2026-03-10T15:00:34.465+0000 7fd60c93c640 -1 osd.7 396 heartbeat_check: no reply from 192.168.123.100:6803 osd.0 since back 2026-03-10T14:59:58.096871+0000 front 2026-03-10T14:59:58.096858+0000 (oldest deadline 2026-03-10T15:00:20.996441+0000) 2026-03-10T15:00:34.875 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 15:00:34 vm03 bash[44271]: debug 2026-03-10T15:00:34.465+0000 7fd60c93c640 -1 osd.7 396 heartbeat_check: no reply from 
192.168.123.100:6807 osd.1 since back 2026-03-10T15:00:01.497611+0000 front 2026-03-10T15:00:01.497596+0000 (oldest deadline 2026-03-10T15:00:24.996966+0000) 2026-03-10T15:00:34.875 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 15:00:34 vm03 bash[44271]: debug 2026-03-10T15:00:34.465+0000 7fd60c93c640 -1 osd.7 396 heartbeat_check: no reply from 192.168.123.100:6811 osd.2 since back 2026-03-10T15:00:08.398207+0000 front 2026-03-10T15:00:08.398108+0000 (oldest deadline 2026-03-10T15:00:34.297765+0000) 2026-03-10T15:00:35.875 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 15:00:35 vm03 bash[44271]: debug 2026-03-10T15:00:35.429+0000 7fd60c93c640 -1 osd.7 396 heartbeat_check: no reply from 192.168.123.100:6803 osd.0 since back 2026-03-10T14:59:58.096871+0000 front 2026-03-10T14:59:58.096858+0000 (oldest deadline 2026-03-10T15:00:20.996441+0000) 2026-03-10T15:00:35.875 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 15:00:35 vm03 bash[44271]: debug 2026-03-10T15:00:35.429+0000 7fd60c93c640 -1 osd.7 396 heartbeat_check: no reply from 192.168.123.100:6807 osd.1 since back 2026-03-10T15:00:01.497611+0000 front 2026-03-10T15:00:01.497596+0000 (oldest deadline 2026-03-10T15:00:24.996966+0000) 2026-03-10T15:00:35.875 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 15:00:35 vm03 bash[44271]: debug 2026-03-10T15:00:35.429+0000 7fd60c93c640 -1 osd.7 396 heartbeat_check: no reply from 192.168.123.100:6811 osd.2 since back 2026-03-10T15:00:08.398207+0000 front 2026-03-10T15:00:08.398108+0000 (oldest deadline 2026-03-10T15:00:34.297765+0000) 2026-03-10T15:00:36.875 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 15:00:36 vm03 bash[44271]: debug 2026-03-10T15:00:36.421+0000 7fd60c93c640 -1 osd.7 396 heartbeat_check: no reply from 192.168.123.100:6803 osd.0 since back 2026-03-10T14:59:58.096871+0000 front 2026-03-10T14:59:58.096858+0000 (oldest deadline 2026-03-10T15:00:20.996441+0000) 2026-03-10T15:00:36.875 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 15:00:36 vm03 bash[44271]: debug 
2026-03-10T15:00:36.421+0000 7fd60c93c640 -1 osd.7 396 heartbeat_check: no reply from 192.168.123.100:6807 osd.1 since back 2026-03-10T15:00:01.497611+0000 front 2026-03-10T15:00:01.497596+0000 (oldest deadline 2026-03-10T15:00:24.996966+0000) 2026-03-10T15:00:36.875 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 15:00:36 vm03 bash[44271]: debug 2026-03-10T15:00:36.421+0000 7fd60c93c640 -1 osd.7 396 heartbeat_check: no reply from 192.168.123.100:6811 osd.2 since back 2026-03-10T15:00:08.398207+0000 front 2026-03-10T15:00:08.398108+0000 (oldest deadline 2026-03-10T15:00:34.297765+0000) 2026-03-10T15:00:37.875 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 15:00:37 vm03 bash[44271]: debug 2026-03-10T15:00:37.465+0000 7fd60c93c640 -1 osd.7 396 heartbeat_check: no reply from 192.168.123.100:6803 osd.0 since back 2026-03-10T14:59:58.096871+0000 front 2026-03-10T14:59:58.096858+0000 (oldest deadline 2026-03-10T15:00:20.996441+0000) 2026-03-10T15:00:37.875 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 15:00:37 vm03 bash[44271]: debug 2026-03-10T15:00:37.465+0000 7fd60c93c640 -1 osd.7 396 heartbeat_check: no reply from 192.168.123.100:6807 osd.1 since back 2026-03-10T15:00:01.497611+0000 front 2026-03-10T15:00:01.497596+0000 (oldest deadline 2026-03-10T15:00:24.996966+0000) 2026-03-10T15:00:37.875 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 15:00:37 vm03 bash[44271]: debug 2026-03-10T15:00:37.465+0000 7fd60c93c640 -1 osd.7 396 heartbeat_check: no reply from 192.168.123.100:6811 osd.2 since back 2026-03-10T15:00:08.398207+0000 front 2026-03-10T15:00:08.398108+0000 (oldest deadline 2026-03-10T15:00:34.297765+0000) 2026-03-10T15:00:38.295 INFO:journalctl@ceph.osd.7.vm03.stdout:Mar 10 15:00:37 vm03 bash[52866]: ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7-osd-7 2026-03-10T15:00:38.340 DEBUG:teuthology.orchestra.run.vm03:> sudo pkill -f 'journalctl -f -n 0 -u ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@osd.7.service' 2026-03-10T15:00:38.350 DEBUG:teuthology.orchestra.run:got remote 
process result: None 2026-03-10T15:00:38.350 INFO:tasks.cephadm.osd.7:Stopped osd.7 2026-03-10T15:00:38.350 INFO:tasks.cephadm.ceph.rgw.foo.a:Stopping rgw.foo.a... 2026-03-10T15:00:38.351 DEBUG:teuthology.orchestra.run.vm00:> sudo systemctl stop ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@rgw.foo.a 2026-03-10T15:00:38.714 INFO:journalctl@ceph.rgw.foo.a.vm00.stdout:Mar 10 15:00:38 vm00 systemd[1]: Stopping Ceph rgw.foo.a for 93bd26bc-1c8f-11f1-8404-610ce866bde7... 2026-03-10T15:00:38.714 INFO:journalctl@ceph.rgw.foo.a.vm00.stdout:Mar 10 15:00:38 vm00 bash[53572]: debug 2026-03-10T15:00:38.394+0000 7f9244cf6640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/radosgw -n client.rgw.foo.a -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-10T15:00:38.714 INFO:journalctl@ceph.rgw.foo.a.vm00.stdout:Mar 10 15:00:38 vm00 bash[53572]: debug 2026-03-10T15:00:38.394+0000 7f9248565980 -1 shutting down 2026-03-10T15:00:48.473 DEBUG:teuthology.orchestra.run.vm00:> sudo pkill -f 'journalctl -f -n 0 -u ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@rgw.foo.a.service' 2026-03-10T15:00:48.484 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T15:00:48.484 INFO:tasks.cephadm.ceph.rgw.foo.a:Stopped rgw.foo.a 2026-03-10T15:00:48.484 INFO:tasks.cephadm.prometheus.a:Stopping prometheus.a... 
2026-03-10T15:00:48.484 DEBUG:teuthology.orchestra.run.vm03:> sudo systemctl stop ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@prometheus.a 2026-03-10T15:00:48.615 DEBUG:teuthology.orchestra.run.vm03:> sudo pkill -f 'journalctl -f -n 0 -u ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@prometheus.a.service' 2026-03-10T15:00:48.625 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T15:00:48.625 INFO:tasks.cephadm.prometheus.a:Stopped prometheus.a 2026-03-10T15:00:48.625 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 --force --keep-logs 2026-03-10T15:00:48.716 INFO:teuthology.orchestra.run.vm00.stdout:Deleting cluster with fsid: 93bd26bc-1c8f-11f1-8404-610ce866bde7 2026-03-10T15:00:53.609 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 15:00:53 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T15:00:53.609 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 15:00:53 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T15:00:53.878 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 15:00:53 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T15:00:53.878 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 15:00:53 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T15:00:53.878 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 15:00:53 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T15:00:53.878 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 15:00:53 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T15:00:54.152 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 15:00:53 vm00 systemd[1]: Stopping Ceph alertmanager.a for 93bd26bc-1c8f-11f1-8404-610ce866bde7... 
2026-03-10T15:00:54.152 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 15:00:53 vm00 bash[56709]: ts=2026-03-10T15:00:53.956Z caller=main.go:583 level=info msg="Received SIGTERM, exiting gracefully..." 2026-03-10T15:00:54.152 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 15:00:53 vm00 bash[62633]: ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7-alertmanager-a 2026-03-10T15:00:54.152 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 15:00:54 vm00 systemd[1]: ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@alertmanager.a.service: Deactivated successfully. 2026-03-10T15:00:54.152 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 15:00:54 vm00 systemd[1]: Stopped Ceph alertmanager.a for 93bd26bc-1c8f-11f1-8404-610ce866bde7. 2026-03-10T15:00:54.464 INFO:journalctl@ceph.alertmanager.a.vm00.stdout:Mar 10 15:00:54 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T15:00:54.464 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 15:00:54 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T15:00:54.464 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 15:00:54 vm00 systemd[1]: Stopping Ceph node-exporter.a for 93bd26bc-1c8f-11f1-8404-610ce866bde7... 
2026-03-10T15:00:54.464 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 15:00:54 vm00 bash[62755]: ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7-node-exporter-a 2026-03-10T15:00:54.464 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 15:00:54 vm00 systemd[1]: ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@node-exporter.a.service: Main process exited, code=exited, status=143/n/a 2026-03-10T15:00:54.464 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 15:00:54 vm00 systemd[1]: ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@node-exporter.a.service: Failed with result 'exit-code'. 2026-03-10T15:00:54.464 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 15:00:54 vm00 systemd[1]: Stopped Ceph node-exporter.a for 93bd26bc-1c8f-11f1-8404-610ce866bde7. 2026-03-10T15:00:54.793 INFO:journalctl@ceph.node-exporter.a.vm00.stdout:Mar 10 15:00:54 vm00 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T15:00:56.052 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 --force --keep-logs 2026-03-10T15:00:56.144 INFO:teuthology.orchestra.run.vm03.stdout:Deleting cluster with fsid: 93bd26bc-1c8f-11f1-8404-610ce866bde7 2026-03-10T15:01:00.941 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 15:01:00 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T15:01:00.941 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 15:01:00 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T15:01:00.941 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 15:01:00 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T15:01:01.196 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 15:01:01 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T15:01:01.197 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 15:01:01 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-10T15:01:01.197 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 15:01:01 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T15:01:01.544 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 15:01:01 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T15:01:01.544 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 15:01:01 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T15:01:01.544 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 15:01:01 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-10T15:01:01.797 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 15:01:01 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T15:01:01.797 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 15:01:01 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T15:01:01.798 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 15:01:01 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T15:01:02.125 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 15:01:01 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-10T15:01:02.125 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 15:01:01 vm03 systemd[1]: Stopping Ceph iscsi.iscsi.a for 93bd26bc-1c8f-11f1-8404-610ce866bde7... 2026-03-10T15:01:02.125 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 15:01:01 vm03 bash[48459]: debug Shutdown received 2026-03-10T15:01:02.125 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 15:01:01 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T15:01:02.125 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 15:01:01 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T15:01:12.191 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 15:01:11 vm03 bash[53358]: ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7-iscsi-iscsi-a 2026-03-10T15:01:12.191 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 15:01:11 vm03 systemd[1]: ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@iscsi.iscsi.a.service: Main process exited, code=exited, status=137/n/a 2026-03-10T15:01:12.191 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 15:01:12 vm03 systemd[1]: ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@iscsi.iscsi.a.service: Failed with result 'exit-code'. 2026-03-10T15:01:12.192 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 15:01:12 vm03 systemd[1]: Stopped Ceph iscsi.iscsi.a for 93bd26bc-1c8f-11f1-8404-610ce866bde7. 
2026-03-10T15:01:12.192 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 15:01:12 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T15:01:12.192 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 15:01:12 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T15:01:12.192 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 15:01:12 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T15:01:12.470 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 15:01:12 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-10T15:01:12.470 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 15:01:12 vm03 systemd[1]: Stopping Ceph grafana.a for 93bd26bc-1c8f-11f1-8404-610ce866bde7...
2026-03-10T15:01:12.470 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 15:01:12 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T15:01:12.470 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 15:01:12 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T15:01:12.470 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 15:01:12 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T15:01:12.470 INFO:journalctl@ceph.iscsi.iscsi.a.vm03.stdout:Mar 10 15:01:12 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T15:01:12.470 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 15:01:12 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T15:01:12.744 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 15:01:12 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T15:01:12.744 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 15:01:12 vm03 bash[50670]: logger=server t=2026-03-10T15:01:12.468446497Z level=info msg="Shutdown started" reason="System signal: terminated"
2026-03-10T15:01:12.744 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 15:01:12 vm03 bash[50670]: logger=tracing t=2026-03-10T15:01:12.468668302Z level=info msg="Closing tracing"
2026-03-10T15:01:12.744 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 15:01:12 vm03 bash[50670]: logger=ticker t=2026-03-10T15:01:12.468866623Z level=info msg=stopped last_tick=2026-03-10T15:01:10Z
2026-03-10T15:01:12.744 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 15:01:12 vm03 bash[50670]: logger=grafana-apiserver t=2026-03-10T15:01:12.469161435Z level=info msg="StorageObjectCountTracker pruner is exiting"
2026-03-10T15:01:12.744 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 15:01:12 vm03 bash[50670]: logger=sqlstore.transactions t=2026-03-10T15:01:12.479575276Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
2026-03-10T15:01:12.745 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 15:01:12 vm03 bash[53524]: ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7-grafana-a
2026-03-10T15:01:12.745 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 15:01:12 vm03 systemd[1]: ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@grafana.a.service: Deactivated successfully.
2026-03-10T15:01:12.745 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 15:01:12 vm03 systemd[1]: Stopped Ceph grafana.a for 93bd26bc-1c8f-11f1-8404-610ce866bde7.
2026-03-10T15:01:12.745 INFO:journalctl@ceph.grafana.a.vm03.stdout:Mar 10 15:01:12 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T15:01:13.006 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 15:01:12 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T15:01:13.006 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 15:01:13 vm03 systemd[1]: Stopping Ceph node-exporter.b for 93bd26bc-1c8f-11f1-8404-610ce866bde7...
2026-03-10T15:01:13.262 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 15:01:13 vm03 bash[53689]: ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7-node-exporter-b
2026-03-10T15:01:13.263 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 15:01:13 vm03 systemd[1]: ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@node-exporter.b.service: Main process exited, code=exited, status=143/n/a
2026-03-10T15:01:13.263 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 15:01:13 vm03 systemd[1]: ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@node-exporter.b.service: Failed with result 'exit-code'.
2026-03-10T15:01:13.263 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 15:01:13 vm03 systemd[1]: Stopped Ceph node-exporter.b for 93bd26bc-1c8f-11f1-8404-610ce866bde7.
2026-03-10T15:01:13.527 INFO:journalctl@ceph.node-exporter.b.vm03.stdout:Mar 10 15:01:13 vm03 systemd[1]: /etc/systemd/system/ceph-93bd26bc-1c8f-11f1-8404-610ce866bde7@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T15:01:13.982 DEBUG:teuthology.orchestra.run.vm00:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
2026-03-10T15:01:13.990 INFO:teuthology.orchestra.run.vm00.stderr:rm: cannot remove '/etc/ceph/ceph.client.admin.keyring': Is a directory
2026-03-10T15:01:13.990 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T15:01:13.991 DEBUG:teuthology.orchestra.run.vm03:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
2026-03-10T15:01:13.998 INFO:tasks.cephadm:Archiving crash dumps...
2026-03-10T15:01:13.998 DEBUG:teuthology.misc:Transferring archived files from vm00:/var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/crash to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1070/remote/vm00/crash
2026-03-10T15:01:13.998 DEBUG:teuthology.orchestra.run.vm00:> sudo tar c -f - -C /var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/crash -- .
2026-03-10T15:01:14.041 INFO:teuthology.orchestra.run.vm00.stderr:tar: /var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/crash: Cannot open: No such file or directory
2026-03-10T15:01:14.041 INFO:teuthology.orchestra.run.vm00.stderr:tar: Error is not recoverable: exiting now
2026-03-10T15:01:14.042 DEBUG:teuthology.misc:Transferring archived files from vm03:/var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/crash to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1070/remote/vm03/crash
2026-03-10T15:01:14.042 DEBUG:teuthology.orchestra.run.vm03:> sudo tar c -f - -C /var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/crash -- .
2026-03-10T15:01:14.050 INFO:teuthology.orchestra.run.vm03.stderr:tar: /var/lib/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/crash: Cannot open: No such file or directory
2026-03-10T15:01:14.050 INFO:teuthology.orchestra.run.vm03.stderr:tar: Error is not recoverable: exiting now
2026-03-10T15:01:14.050 INFO:tasks.cephadm:Checking cluster log for badness...
2026-03-10T15:01:14.050 DEBUG:teuthology.orchestra.run.vm00:> sudo egrep '\[ERR\]|\[WRN\]|\[SEC\]' /var/log/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/ceph.log | egrep CEPHADM_ | egrep -v '\(MDS_ALL_DOWN\)' | egrep -v '\(MDS_UP_LESS_THAN_MAX\)' | egrep -v 'but it is still running' | egrep -v 'overall HEALTH_' | egrep -v '\(OSDMAP_FLAGS\)' | egrep -v '\(PG_' | egrep -v '\(OSD_' | egrep -v '\(OBJECT_' | egrep -v '\(POOL_APP_NOT_ENABLED\)' | head -n 1
2026-03-10T15:01:14.094 INFO:tasks.cephadm:Compressing logs...
2026-03-10T15:01:14.095 DEBUG:teuthology.orchestra.run.vm00:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose --
2026-03-10T15:01:14.139 DEBUG:teuthology.orchestra.run.vm03:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose --
2026-03-10T15:01:14.146 INFO:teuthology.orchestra.run.vm00.stderr:find: gzip -5 --verbose -- /var/log/ceph/cephadm.log
2026-03-10T15:01:14.146 INFO:teuthology.orchestra.run.vm00.stderr:‘/var/log/rbd-target-api’: No such file or directory
2026-03-10T15:01:14.147 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/ceph-osd.3.log
2026-03-10T15:01:14.147 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/cephadm.log: gzip -5 --verbose -- /var/log/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/ceph.log
2026-03-10T15:01:14.148 INFO:teuthology.orchestra.run.vm03.stderr:find: gzip -5 --verbose -- /var/log/ceph/cephadm.log
2026-03-10T15:01:14.149 INFO:teuthology.orchestra.run.vm03.stderr:‘/var/log/rbd-target-api’: No such file or directory
2026-03-10T15:01:14.149 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/ceph-mgr.x.log
2026-03-10T15:01:14.149 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/cephadm.log: gzip -5 --verbose -- /var/log/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/ceph.log
2026-03-10T15:01:14.150 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/ceph-osd.3.log: 90.5% -- replaced with /var/log/ceph/cephadm.log.gz
2026-03-10T15:01:14.151 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/ceph-mon.c.log
2026-03-10T15:01:14.153 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/ceph.log: 92.8% -- replaced with /var/log/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/ceph.log.gz
2026-03-10T15:01:14.154 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/ceph-mgr.x.log: gzip -5 --verbose -- /var/log/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/ceph-mon.b.log
2026-03-10T15:01:14.155 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/ceph-osd.1.log
2026-03-10T15:01:14.155 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/ceph-mon.c.log: gzip -5 --verbose -- /var/log/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/ceph-mgr.y.log
2026-03-10T15:01:14.156 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/ceph.log: 91.5% 87.1% -- replaced with /var/log/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/ceph-mgr.x.log.gz
2026-03-10T15:01:14.156 INFO:teuthology.orchestra.run.vm03.stderr: -- replaced with /var/log/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/ceph.log.gz
2026-03-10T15:01:14.156 INFO:teuthology.orchestra.run.vm03.stderr: 90.7% -- replaced with /var/log/ceph/cephadm.log.gz
2026-03-10T15:01:14.157 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/ceph-osd.5.log
2026-03-10T15:01:14.157 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/ceph-osd.7.log
2026-03-10T15:01:14.163 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/ceph-mon.b.log: /var/log/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/ceph-osd.5.log: gzip -5 --verbose -- /var/log/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/ceph-osd.6.log
2026-03-10T15:01:14.163 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/ceph-mon.a.log
2026-03-10T15:01:14.163 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/ceph-mgr.y.log: /var/log/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/ceph-osd.1.log: gzip -5 --verbose -- /var/log/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/ceph-osd.2.log
2026-03-10T15:01:14.179 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/ceph-osd.7.log: gzip -5 --verbose -- /var/log/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/ceph.audit.log
2026-03-10T15:01:14.179 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/ceph-mon.a.log: gzip -5 --verbose -- /var/log/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/ceph.audit.log
2026-03-10T15:01:14.187 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/ceph-osd.6.log: gzip -5 --verbose -- /var/log/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/ceph-volume.log
2026-03-10T15:01:14.187 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/ceph-osd.2.log: gzip -5 --verbose -- /var/log/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/ceph-volume.log
2026-03-10T15:01:14.189 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/ceph.audit.log: 90.5% -- replaced with /var/log/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/ceph.audit.log.gz
2026-03-10T15:01:14.189 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/ceph.cephadm.log
2026-03-10T15:01:14.193 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/ceph.audit.log: 94.2% -- replaced with /var/log/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/ceph.audit.log.gz
2026-03-10T15:01:14.193 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/ceph-client.rgw.foo.a.log
2026-03-10T15:01:14.203 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/ceph-volume.log: gzip -5 --verbose -- /var/log/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/ceph-osd.4.log
2026-03-10T15:01:14.207 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/ceph.cephadm.log: 80.1% -- replaced with /var/log/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/ceph.cephadm.log.gz
2026-03-10T15:01:14.207 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/ceph-volume.log: gzip -5 --verbose -- /var/log/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/ceph.cephadm.log
2026-03-10T15:01:14.207 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/ceph-client.rgw.foo.a.log: 59.4% -- replaced with /var/log/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/ceph-client.rgw.foo.a.log.gz
2026-03-10T15:01:14.210 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/tcmu-runner.log
2026-03-10T15:01:14.211 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/ceph-osd.0.log
2026-03-10T15:01:14.213 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/ceph-osd.4.log: 96.2% -- replaced with /var/log/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/ceph-volume.log.gz
2026-03-10T15:01:14.218 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/ceph.cephadm.log: 88.6% -- replaced with /var/log/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/ceph.cephadm.log.gz
2026-03-10T15:01:14.223 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/tcmu-runner.log: 73.5% -- replaced with /var/log/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/tcmu-runner.log.gz
2026-03-10T15:01:14.235 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/ceph-osd.0.log: 96.1% -- replaced with /var/log/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/ceph-volume.log.gz
2026-03-10T15:01:14.535 INFO:teuthology.orchestra.run.vm00.stderr: 89.7% -- replaced with /var/log/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/ceph-mgr.y.log.gz
2026-03-10T15:01:14.677 INFO:teuthology.orchestra.run.vm03.stderr: 92.4% -- replaced with /var/log/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/ceph-mon.b.log.gz
2026-03-10T15:01:14.694 INFO:teuthology.orchestra.run.vm00.stderr: 92.2% -- replaced with /var/log/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/ceph-mon.c.log.gz
2026-03-10T15:01:15.554 INFO:teuthology.orchestra.run.vm00.stderr: 91.4% -- replaced with /var/log/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/ceph-mon.a.log.gz
2026-03-10T15:01:16.739 INFO:teuthology.orchestra.run.vm03.stderr: 94.7% -- replaced with /var/log/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/ceph-osd.6.log.gz
2026-03-10T15:01:16.768 INFO:teuthology.orchestra.run.vm03.stderr: 94.7% -- replaced with /var/log/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/ceph-osd.5.log.gz
2026-03-10T15:01:16.830 INFO:teuthology.orchestra.run.vm00.stderr: 94.7% -- replaced with /var/log/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/ceph-osd.2.log.gz
2026-03-10T15:01:16.886 INFO:teuthology.orchestra.run.vm03.stderr: 94.8% -- replaced with /var/log/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/ceph-osd.4.log.gz
2026-03-10T15:01:16.913 INFO:teuthology.orchestra.run.vm03.stderr: 94.9% -- replaced with /var/log/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/ceph-osd.7.log.gz
2026-03-10T15:01:16.914 INFO:teuthology.orchestra.run.vm03.stderr:
2026-03-10T15:01:16.914 INFO:teuthology.orchestra.run.vm03.stderr:real 0m2.772s
2026-03-10T15:01:16.914 INFO:teuthology.orchestra.run.vm03.stderr:user 0m5.181s
2026-03-10T15:01:16.914 INFO:teuthology.orchestra.run.vm03.stderr:sys 0m0.322s
2026-03-10T15:01:17.125 INFO:teuthology.orchestra.run.vm00.stderr: 94.7% -- replaced with /var/log/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/ceph-osd.1.log.gz
2026-03-10T15:01:17.147 INFO:teuthology.orchestra.run.vm00.stderr: 94.8% -- replaced with /var/log/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/ceph-osd.0.log.gz
2026-03-10T15:01:17.173 INFO:teuthology.orchestra.run.vm00.stderr: 94.7% -- replaced with /var/log/ceph/93bd26bc-1c8f-11f1-8404-610ce866bde7/ceph-osd.3.log.gz
2026-03-10T15:01:17.174 INFO:teuthology.orchestra.run.vm00.stderr:
2026-03-10T15:01:17.174 INFO:teuthology.orchestra.run.vm00.stderr:real 0m3.033s
2026-03-10T15:01:17.174 INFO:teuthology.orchestra.run.vm00.stderr:user 0m5.682s
2026-03-10T15:01:17.174 INFO:teuthology.orchestra.run.vm00.stderr:sys 0m0.342s
2026-03-10T15:01:17.174 INFO:tasks.cephadm:Archiving logs...
2026-03-10T15:01:17.174 DEBUG:teuthology.misc:Transferring archived files from vm00:/var/log/ceph to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1070/remote/vm00/log
2026-03-10T15:01:17.174 DEBUG:teuthology.orchestra.run.vm00:> sudo tar c -f - -C /var/log/ceph -- .
2026-03-10T15:01:17.445 DEBUG:teuthology.misc:Transferring archived files from vm03:/var/log/ceph to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1070/remote/vm03/log
2026-03-10T15:01:17.445 DEBUG:teuthology.orchestra.run.vm03:> sudo tar c -f - -C /var/log/ceph -- .
2026-03-10T15:01:17.665 INFO:tasks.cephadm:Removing cluster...
2026-03-10T15:01:17.665 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 --force
2026-03-10T15:01:17.758 INFO:teuthology.orchestra.run.vm00.stdout:Deleting cluster with fsid: 93bd26bc-1c8f-11f1-8404-610ce866bde7
2026-03-10T15:01:19.050 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 93bd26bc-1c8f-11f1-8404-610ce866bde7 --force
2026-03-10T15:01:19.145 INFO:teuthology.orchestra.run.vm03.stdout:Deleting cluster with fsid: 93bd26bc-1c8f-11f1-8404-610ce866bde7
2026-03-10T15:01:20.436 INFO:tasks.cephadm:Removing cephadm ...
2026-03-10T15:01:20.437 DEBUG:teuthology.orchestra.run.vm00:> rm -rf /home/ubuntu/cephtest/cephadm
2026-03-10T15:01:20.440 DEBUG:teuthology.orchestra.run.vm03:> rm -rf /home/ubuntu/cephtest/cephadm
2026-03-10T15:01:20.459 INFO:tasks.cephadm:Teardown complete
2026-03-10T15:01:20.459 DEBUG:teuthology.run_tasks:Unwinding manager install
2026-03-10T15:01:20.497 INFO:teuthology.task.install.util:Removing shipped files: /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer...
2026-03-10T15:01:20.519 DEBUG:teuthology.orchestra.run.vm00:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer
2026-03-10T15:01:20.520 DEBUG:teuthology.orchestra.run.vm03:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer
2026-03-10T15:01:20.537 INFO:teuthology.task.install.deb:Removing packages: ceph, cephadm, ceph-mds, ceph-mgr, ceph-common, ceph-fuse, ceph-test, ceph-volume, radosgw, python3-rados, python3-rgw, python3-cephfs, python3-rbd, libcephfs2, libcephfs-dev, librados2, librbd1, rbd-fuse on Debian system.
2026-03-10T15:01:20.541 DEBUG:teuthology.orchestra.run.vm00:> for d in ceph cephadm ceph-mds ceph-mgr ceph-common ceph-fuse ceph-test ceph-volume radosgw python3-rados python3-rgw python3-cephfs python3-rbd libcephfs2 libcephfs-dev librados2 librbd1 rbd-fuse ; do sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" purge $d || true ; done
2026-03-10T15:01:20.547 INFO:teuthology.task.install.deb:Removing packages: ceph, cephadm, ceph-mds, ceph-mgr, ceph-common, ceph-fuse, ceph-test, ceph-volume, radosgw, python3-rados, python3-rgw, python3-cephfs, python3-rbd, libcephfs2, libcephfs-dev, librados2, librbd1, rbd-fuse on Debian system.
2026-03-10T15:01:20.547 DEBUG:teuthology.orchestra.run.vm03:> for d in ceph cephadm ceph-mds ceph-mgr ceph-common ceph-fuse ceph-test ceph-volume radosgw python3-rados python3-rgw python3-cephfs python3-rbd libcephfs2 libcephfs-dev librados2 librbd1 rbd-fuse ; do sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" purge $d || true ; done
2026-03-10T15:01:20.635 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-10T15:01:20.637 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists...
2026-03-10T15:01:20.821 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree...
2026-03-10T15:01:20.822 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information...
2026-03-10T15:01:20.833 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-10T15:01:20.834 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-10T15:01:21.074 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T15:01:21.074 INFO:teuthology.orchestra.run.vm00.stdout:  ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-10T15:01:21.080 INFO:teuthology.orchestra.run.vm00.stdout:  libsgutils2-2 sg3-utils sg3-utils-udev
2026-03-10T15:01:21.080 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T15:01:21.080 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T15:01:21.080 INFO:teuthology.orchestra.run.vm03.stdout:  ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-10T15:01:21.080 INFO:teuthology.orchestra.run.vm03.stdout:  libsgutils2-2 sg3-utils sg3-utils-udev
2026-03-10T15:01:21.081 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T15:01:21.093 INFO:teuthology.orchestra.run.vm00.stdout:The following packages will be REMOVED:
2026-03-10T15:01:21.094 INFO:teuthology.orchestra.run.vm00.stdout:  ceph*
2026-03-10T15:01:21.100 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED:
2026-03-10T15:01:21.102 INFO:teuthology.orchestra.run.vm03.stdout:  ceph*
2026-03-10T15:01:21.291 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 1 to remove and 12 not upgraded.
2026-03-10T15:01:21.291 INFO:teuthology.orchestra.run.vm00.stdout:After this operation, 47.1 kB disk space will be freed.
2026-03-10T15:01:21.362 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 1 to remove and 12 not upgraded.
2026-03-10T15:01:21.362 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 47.1 kB disk space will be freed.
2026-03-10T15:01:21.389 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118605 files and directories currently installed.)
2026-03-10T15:01:21.392 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T15:01:21.411 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118605 files and directories currently installed.)
2026-03-10T15:01:21.414 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T15:01:22.621 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T15:01:22.657 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists...
2026-03-10T15:01:22.734 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T15:01:22.768 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-10T15:01:22.867 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree...
2026-03-10T15:01:22.868 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information...
2026-03-10T15:01:22.983 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-10T15:01:22.984 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-10T15:01:23.089 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T15:01:23.090 INFO:teuthology.orchestra.run.vm00.stdout:  ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-10T15:01:23.090 INFO:teuthology.orchestra.run.vm00.stdout:  libsgutils2-2 python-asyncssh-doc python3-asyncssh sg3-utils sg3-utils-udev
2026-03-10T15:01:23.090 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T15:01:23.105 INFO:teuthology.orchestra.run.vm00.stdout:The following packages will be REMOVED:
2026-03-10T15:01:23.105 INFO:teuthology.orchestra.run.vm00.stdout:  ceph-mgr-cephadm* cephadm*
2026-03-10T15:01:23.208 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T15:01:23.208 INFO:teuthology.orchestra.run.vm03.stdout:  ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-10T15:01:23.209 INFO:teuthology.orchestra.run.vm03.stdout:  libsgutils2-2 python-asyncssh-doc python3-asyncssh sg3-utils sg3-utils-udev
2026-03-10T15:01:23.209 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T15:01:23.223 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED:
2026-03-10T15:01:23.224 INFO:teuthology.orchestra.run.vm03.stdout:  ceph-mgr-cephadm* cephadm*
2026-03-10T15:01:23.294 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 2 to remove and 12 not upgraded.
2026-03-10T15:01:23.294 INFO:teuthology.orchestra.run.vm00.stdout:After this operation, 1775 kB disk space will be freed.
2026-03-10T15:01:23.327 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118603 files and directories currently installed.)
2026-03-10T15:01:23.329 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T15:01:23.347 INFO:teuthology.orchestra.run.vm00.stdout:Removing cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T15:01:23.380 INFO:teuthology.orchestra.run.vm00.stdout:Looking for files to backup/remove ...
2026-03-10T15:01:23.381 INFO:teuthology.orchestra.run.vm00.stdout:Not backing up/removing `/var/lib/cephadm', it matches ^/var/.*.
2026-03-10T15:01:23.385 INFO:teuthology.orchestra.run.vm00.stdout:Removing user `cephadm' ...
2026-03-10T15:01:23.385 INFO:teuthology.orchestra.run.vm00.stdout:Warning: group `nogroup' has no more members.
2026-03-10T15:01:23.399 INFO:teuthology.orchestra.run.vm00.stdout:Done.
2026-03-10T15:01:23.402 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 2 to remove and 12 not upgraded.
2026-03-10T15:01:23.402 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 1775 kB disk space will be freed.
2026-03-10T15:01:23.427 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-10T15:01:23.444 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118603 files and directories currently installed.)
2026-03-10T15:01:23.446 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T15:01:23.467 INFO:teuthology.orchestra.run.vm03.stdout:Removing cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T15:01:23.498 INFO:teuthology.orchestra.run.vm03.stdout:Looking for files to backup/remove ...
2026-03-10T15:01:23.500 INFO:teuthology.orchestra.run.vm03.stdout:Not backing up/removing `/var/lib/cephadm', it matches ^/var/.*.
2026-03-10T15:01:23.503 INFO:teuthology.orchestra.run.vm03.stdout:Removing user `cephadm' ...
2026-03-10T15:01:23.503 INFO:teuthology.orchestra.run.vm03.stdout:Warning: group `nogroup' has no more members.
2026-03-10T15:01:23.517 INFO:teuthology.orchestra.run.vm03.stdout:Done.
2026-03-10T15:01:23.539 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118529 files and directories currently installed.)
2026-03-10T15:01:23.541 INFO:teuthology.orchestra.run.vm00.stdout:Purging configuration files for cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T15:01:23.545 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-10T15:01:23.824 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118529 files and directories currently installed.)
2026-03-10T15:01:23.825 INFO:teuthology.orchestra.run.vm03.stdout:Purging configuration files for cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T15:01:24.674 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T15:01:24.713 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists...
2026-03-10T15:01:24.925 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree...
2026-03-10T15:01:24.925 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information...
2026-03-10T15:01:24.954 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T15:01:24.989 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-10T15:01:25.133 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T15:01:25.133 INFO:teuthology.orchestra.run.vm00.stdout:  ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-10T15:01:25.134 INFO:teuthology.orchestra.run.vm00.stdout:  libsgutils2-2 python-asyncssh-doc python3-asyncssh sg3-utils sg3-utils-udev
2026-03-10T15:01:25.134 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T15:01:25.153 INFO:teuthology.orchestra.run.vm00.stdout:The following packages will be REMOVED:
2026-03-10T15:01:25.155 INFO:teuthology.orchestra.run.vm00.stdout:  ceph-mds*
2026-03-10T15:01:25.216 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-10T15:01:25.226 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-10T15:01:25.335 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T15:01:25.335 INFO:teuthology.orchestra.run.vm03.stdout:  ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-10T15:01:25.335 INFO:teuthology.orchestra.run.vm03.stdout:  libsgutils2-2 python-asyncssh-doc python3-asyncssh sg3-utils sg3-utils-udev
2026-03-10T15:01:25.336 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T15:01:25.343 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED:
2026-03-10T15:01:25.343 INFO:teuthology.orchestra.run.vm03.stdout:  ceph-mds*
2026-03-10T15:01:25.406 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 1 to remove and 12 not upgraded.
2026-03-10T15:01:25.406 INFO:teuthology.orchestra.run.vm00.stdout:After this operation, 7437 kB disk space will be freed.
2026-03-10T15:01:25.448 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... 118529 files and directories currently installed.)
2026-03-10T15:01:25.451 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph-mds (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T15:01:25.512 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 1 to remove and 12 not upgraded.
2026-03-10T15:01:25.512 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 7437 kB disk space will be freed.
2026-03-10T15:01:25.555 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... 118529 files and directories currently installed.)
2026-03-10T15:01:25.556 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-mds (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T15:01:25.898 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-10T15:01:26.006 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... 118521 files and directories currently installed.)
2026-03-10T15:01:26.008 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-10T15:01:26.009 INFO:teuthology.orchestra.run.vm00.stdout:Purging configuration files for ceph-mds (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T15:01:26.114 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... 118521 files and directories currently installed.)
2026-03-10T15:01:26.117 INFO:teuthology.orchestra.run.vm03.stdout:Purging configuration files for ceph-mds (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T15:01:27.682 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T15:01:27.717 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists...
2026-03-10T15:01:27.720 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T15:01:27.758 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-10T15:01:27.889 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree...
2026-03-10T15:01:27.889 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information...
2026-03-10T15:01:27.963 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-10T15:01:27.963 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-10T15:01:28.119 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T15:01:28.119 INFO:teuthology.orchestra.run.vm00.stdout:  ceph-mgr-modules-core ceph-mon kpartx libboost-iostreams1.74.0
2026-03-10T15:01:28.120 INFO:teuthology.orchestra.run.vm00.stdout:  libboost-thread1.74.0 libpmemobj1 libsgutils2-2 python-asyncssh-doc
2026-03-10T15:01:28.120 INFO:teuthology.orchestra.run.vm00.stdout:  python-pastedeploy-tpl python3-asyncssh python3-cachetools python3-cheroot
2026-03-10T15:01:28.120 INFO:teuthology.orchestra.run.vm00.stdout:  python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-10T15:01:28.120 INFO:teuthology.orchestra.run.vm00.stdout:  python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-10T15:01:28.120 INFO:teuthology.orchestra.run.vm00.stdout:  python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-10T15:01:28.120 INFO:teuthology.orchestra.run.vm00.stdout:  python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-10T15:01:28.120 INFO:teuthology.orchestra.run.vm00.stdout:  python3-pecan python3-portend python3-psutil python3-pyinotify
2026-03-10T15:01:28.120 INFO:teuthology.orchestra.run.vm00.stdout:  python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa
2026-03-10T15:01:28.120 INFO:teuthology.orchestra.run.vm00.stdout:  python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-10T15:01:28.120 INFO:teuthology.orchestra.run.vm00.stdout:  python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-10T15:01:28.120 INFO:teuthology.orchestra.run.vm00.stdout:  python3-threadpoolctl python3-waitress python3-webob python3-websocket
2026-03-10T15:01:28.120 INFO:teuthology.orchestra.run.vm00.stdout:  python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-10T15:01:28.121 INFO:teuthology.orchestra.run.vm00.stdout:  sg3-utils-udev
2026-03-10T15:01:28.121 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T15:01:28.134 INFO:teuthology.orchestra.run.vm00.stdout:The following packages will be REMOVED:
2026-03-10T15:01:28.134 INFO:teuthology.orchestra.run.vm00.stdout:  ceph-mgr* ceph-mgr-dashboard* ceph-mgr-diskprediction-local*
2026-03-10T15:01:28.135 INFO:teuthology.orchestra.run.vm00.stdout:  ceph-mgr-k8sevents*
2026-03-10T15:01:28.194 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T15:01:28.194 INFO:teuthology.orchestra.run.vm03.stdout:  ceph-mgr-modules-core ceph-mon kpartx libboost-iostreams1.74.0
2026-03-10T15:01:28.195 INFO:teuthology.orchestra.run.vm03.stdout:  libboost-thread1.74.0 libpmemobj1 libsgutils2-2 python-asyncssh-doc
2026-03-10T15:01:28.195 INFO:teuthology.orchestra.run.vm03.stdout:  python-pastedeploy-tpl python3-asyncssh python3-cachetools python3-cheroot
2026-03-10T15:01:28.195 INFO:teuthology.orchestra.run.vm03.stdout:  python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-10T15:01:28.195 INFO:teuthology.orchestra.run.vm03.stdout:  python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-10T15:01:28.195 INFO:teuthology.orchestra.run.vm03.stdout:  python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-10T15:01:28.195 INFO:teuthology.orchestra.run.vm03.stdout:  python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-10T15:01:28.195 INFO:teuthology.orchestra.run.vm03.stdout:  python3-pecan python3-portend python3-psutil python3-pyinotify
2026-03-10T15:01:28.195 INFO:teuthology.orchestra.run.vm03.stdout:  python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa
2026-03-10T15:01:28.195 INFO:teuthology.orchestra.run.vm03.stdout:  python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-10T15:01:28.195 INFO:teuthology.orchestra.run.vm03.stdout:  python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-10T15:01:28.195 INFO:teuthology.orchestra.run.vm03.stdout:  python3-threadpoolctl python3-waitress python3-webob python3-websocket
2026-03-10T15:01:28.195 INFO:teuthology.orchestra.run.vm03.stdout:  python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-10T15:01:28.195 INFO:teuthology.orchestra.run.vm03.stdout:  sg3-utils-udev
2026-03-10T15:01:28.195 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T15:01:28.209 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED:
2026-03-10T15:01:28.209 INFO:teuthology.orchestra.run.vm03.stdout:  ceph-mgr* ceph-mgr-dashboard* ceph-mgr-diskprediction-local*
2026-03-10T15:01:28.210 INFO:teuthology.orchestra.run.vm03.stdout:  ceph-mgr-k8sevents*
2026-03-10T15:01:28.323 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 4 to remove and 12 not upgraded.
2026-03-10T15:01:28.323 INFO:teuthology.orchestra.run.vm00.stdout:After this operation, 165 MB disk space will be freed.
2026-03-10T15:01:28.366 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... 118521 files and directories currently installed.)
2026-03-10T15:01:28.369 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T15:01:28.381 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T15:01:28.394 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 4 to remove and 12 not upgraded.
2026-03-10T15:01:28.394 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 165 MB disk space will be freed.
2026-03-10T15:01:28.411 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T15:01:28.441 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... 118521 files and directories currently installed.)
2026-03-10T15:01:28.443 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T15:01:28.454 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T15:01:28.454 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T15:01:28.481 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T15:01:28.524 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T15:01:28.988 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... 117937 files and directories currently installed.)
2026-03-10T15:01:28.990 INFO:teuthology.orchestra.run.vm00.stdout:Purging configuration files for ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T15:01:29.047 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... 117937 files and directories currently installed.)
2026-03-10T15:01:29.049 INFO:teuthology.orchestra.run.vm03.stdout:Purging configuration files for ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T15:01:30.616 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T15:01:30.651 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists...
2026-03-10T15:01:30.699 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T15:01:30.734 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-10T15:01:30.862 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree...
2026-03-10T15:01:30.862 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information...
2026-03-10T15:01:30.941 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-10T15:01:30.941 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-10T15:01:31.023 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T15:01:31.023 INFO:teuthology.orchestra.run.vm00.stdout:  ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T15:01:31.023 INFO:teuthology.orchestra.run.vm00.stdout:  libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-10T15:01:31.024 INFO:teuthology.orchestra.run.vm00.stdout:  libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-10T15:01:31.024 INFO:teuthology.orchestra.run.vm00.stdout:  python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-10T15:01:31.024 INFO:teuthology.orchestra.run.vm00.stdout:  python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth
2026-03-10T15:01:31.024 INFO:teuthology.orchestra.run.vm00.stdout:  python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-10T15:01:31.024 INFO:teuthology.orchestra.run.vm00.stdout:  python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-10T15:01:31.024 INFO:teuthology.orchestra.run.vm00.stdout:  python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-10T15:01:31.024 INFO:teuthology.orchestra.run.vm00.stdout:  python3-pastescript python3-pecan python3-portend python3-prettytable
2026-03-10T15:01:31.024 INFO:teuthology.orchestra.run.vm00.stdout:  python3-psutil python3-pyinotify python3-repoze.lru
2026-03-10T15:01:31.024 INFO:teuthology.orchestra.run.vm00.stdout:  python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-10T15:01:31.024 INFO:teuthology.orchestra.run.vm00.stdout:  python3-simplejson python3-singledispatch python3-sklearn
2026-03-10T15:01:31.024 INFO:teuthology.orchestra.run.vm00.stdout:  python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-10T15:01:31.024 INFO:teuthology.orchestra.run.vm00.stdout:  python3-waitress python3-wcwidth python3-webob python3-websocket
2026-03-10T15:01:31.024 INFO:teuthology.orchestra.run.vm00.stdout:  python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-10T15:01:31.024 INFO:teuthology.orchestra.run.vm00.stdout:  sg3-utils-udev smartmontools socat xmlstarlet
2026-03-10T15:01:31.024 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T15:01:31.037 INFO:teuthology.orchestra.run.vm00.stdout:The following packages will be REMOVED:
2026-03-10T15:01:31.039 INFO:teuthology.orchestra.run.vm00.stdout:  ceph-base* ceph-common* ceph-mon* ceph-osd* ceph-test* ceph-volume* radosgw*
2026-03-10T15:01:31.058 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T15:01:31.058 INFO:teuthology.orchestra.run.vm03.stdout:  ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T15:01:31.059 INFO:teuthology.orchestra.run.vm03.stdout:  libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-10T15:01:31.059 INFO:teuthology.orchestra.run.vm03.stdout:  libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-10T15:01:31.059 INFO:teuthology.orchestra.run.vm03.stdout:  python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-10T15:01:31.060 INFO:teuthology.orchestra.run.vm03.stdout:  python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth
2026-03-10T15:01:31.060 INFO:teuthology.orchestra.run.vm03.stdout:  python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-10T15:01:31.060 INFO:teuthology.orchestra.run.vm03.stdout:  python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-10T15:01:31.060 INFO:teuthology.orchestra.run.vm03.stdout:  python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-10T15:01:31.060 INFO:teuthology.orchestra.run.vm03.stdout:  python3-pastescript python3-pecan python3-portend python3-prettytable
2026-03-10T15:01:31.060 INFO:teuthology.orchestra.run.vm03.stdout:  python3-psutil python3-pyinotify python3-repoze.lru
2026-03-10T15:01:31.060 INFO:teuthology.orchestra.run.vm03.stdout:  python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-10T15:01:31.060 INFO:teuthology.orchestra.run.vm03.stdout:  python3-simplejson python3-singledispatch python3-sklearn
2026-03-10T15:01:31.060 INFO:teuthology.orchestra.run.vm03.stdout:  python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-10T15:01:31.060 INFO:teuthology.orchestra.run.vm03.stdout:  python3-waitress python3-wcwidth python3-webob python3-websocket
2026-03-10T15:01:31.060 INFO:teuthology.orchestra.run.vm03.stdout:  python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-10T15:01:31.060 INFO:teuthology.orchestra.run.vm03.stdout:  sg3-utils-udev smartmontools socat xmlstarlet
2026-03-10T15:01:31.060 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T15:01:31.076 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED:
2026-03-10T15:01:31.077 INFO:teuthology.orchestra.run.vm03.stdout:  ceph-base* ceph-common* ceph-mon* ceph-osd* ceph-test* ceph-volume* radosgw*
2026-03-10T15:01:31.231 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 7 to remove and 12 not upgraded.
2026-03-10T15:01:31.232 INFO:teuthology.orchestra.run.vm00.stdout:After this operation, 472 MB disk space will be freed.
2026-03-10T15:01:31.242 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 7 to remove and 12 not upgraded.
2026-03-10T15:01:31.242 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 472 MB disk space will be freed.
2026-03-10T15:01:31.268 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... 117937 files and directories currently installed.)
2026-03-10T15:01:31.269 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph-volume (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T15:01:31.278 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... 117937 files and directories currently installed.)
2026-03-10T15:01:31.280 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-volume (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T15:01:31.331 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph-osd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T15:01:31.342 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-osd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T15:01:31.776 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph-mon (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T15:01:31.793 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-mon (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T15:01:32.219 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph-base (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T15:01:32.236 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-base (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T15:01:32.743 INFO:teuthology.orchestra.run.vm00.stdout:Removing radosgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T15:01:32.836 INFO:teuthology.orchestra.run.vm03.stdout:Removing radosgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T15:01:33.241 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph-test (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T15:01:33.268 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T15:01:33.321 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-test (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T15:01:33.350 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T15:01:33.711 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-10T15:01:33.745 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-10T15:01:33.784 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-10T15:01:33.819 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-10T15:01:33.820 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... 117455 files and directories currently installed.)
2026-03-10T15:01:33.822 INFO:teuthology.orchestra.run.vm00.stdout:Purging configuration files for radosgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T15:01:33.887 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... 117455 files and directories currently installed.)
2026-03-10T15:01:33.889 INFO:teuthology.orchestra.run.vm03.stdout:Purging configuration files for radosgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T15:01:34.480 INFO:teuthology.orchestra.run.vm00.stdout:Purging configuration files for ceph-mon (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T15:01:34.505 INFO:teuthology.orchestra.run.vm03.stdout:Purging configuration files for ceph-mon (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T15:01:34.940 INFO:teuthology.orchestra.run.vm00.stdout:Purging configuration files for ceph-base (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T15:01:34.946 INFO:teuthology.orchestra.run.vm03.stdout:Purging configuration files for ceph-base (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T15:01:35.414 INFO:teuthology.orchestra.run.vm03.stdout:Purging configuration files for ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T15:01:35.414 INFO:teuthology.orchestra.run.vm00.stdout:Purging configuration files for ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T15:01:35.831 INFO:teuthology.orchestra.run.vm03.stdout:Purging configuration files for ceph-osd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T15:01:35.863 INFO:teuthology.orchestra.run.vm00.stdout:Purging configuration files for ceph-osd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T15:01:37.483 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T15:01:37.485 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T15:01:37.520 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-10T15:01:37.523 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists...
2026-03-10T15:01:37.673 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-10T15:01:37.673 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-10T15:01:37.738 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree...
2026-03-10T15:01:37.739 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information...
2026-03-10T15:01:37.890 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T15:01:37.890 INFO:teuthology.orchestra.run.vm03.stdout:  ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T15:01:37.890 INFO:teuthology.orchestra.run.vm03.stdout:  libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-10T15:01:37.890 INFO:teuthology.orchestra.run.vm03.stdout:  libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-10T15:01:37.890 INFO:teuthology.orchestra.run.vm03.stdout:  python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-10T15:01:37.890 INFO:teuthology.orchestra.run.vm03.stdout:  python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth
2026-03-10T15:01:37.890 INFO:teuthology.orchestra.run.vm03.stdout:  python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-10T15:01:37.890 INFO:teuthology.orchestra.run.vm03.stdout:  python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-10T15:01:37.890 INFO:teuthology.orchestra.run.vm03.stdout:  python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-10T15:01:37.890 INFO:teuthology.orchestra.run.vm03.stdout:  python3-pastescript python3-pecan python3-portend python3-prettytable
2026-03-10T15:01:37.890 INFO:teuthology.orchestra.run.vm03.stdout:  python3-psutil python3-pyinotify python3-repoze.lru
2026-03-10T15:01:37.890 INFO:teuthology.orchestra.run.vm03.stdout:  python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-10T15:01:37.890 INFO:teuthology.orchestra.run.vm03.stdout:  python3-simplejson python3-singledispatch python3-sklearn
2026-03-10T15:01:37.890 INFO:teuthology.orchestra.run.vm03.stdout:  python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-10T15:01:37.890 INFO:teuthology.orchestra.run.vm03.stdout:  python3-waitress python3-wcwidth python3-webob python3-websocket
2026-03-10T15:01:37.891 INFO:teuthology.orchestra.run.vm03.stdout:  python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-10T15:01:37.891 INFO:teuthology.orchestra.run.vm03.stdout:  sg3-utils-udev smartmontools socat xmlstarlet
2026-03-10T15:01:37.891 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T15:01:37.902 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED: 2026-03-10T15:01:37.903 INFO:teuthology.orchestra.run.vm03.stdout: ceph-fuse* 2026-03-10T15:01:37.915 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T15:01:37.915 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T15:01:37.915 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-10T15:01:37.915 INFO:teuthology.orchestra.run.vm00.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc 2026-03-10T15:01:37.915 INFO:teuthology.orchestra.run.vm00.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-10T15:01:37.915 INFO:teuthology.orchestra.run.vm00.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth 2026-03-10T15:01:37.915 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-10T15:01:37.915 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-10T15:01:37.915 INFO:teuthology.orchestra.run.vm00.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-10T15:01:37.915 INFO:teuthology.orchestra.run.vm00.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable 2026-03-10T15:01:37.915 INFO:teuthology.orchestra.run.vm00.stdout: python3-psutil python3-pyinotify python3-repoze.lru 2026-03-10T15:01:37.916 INFO:teuthology.orchestra.run.vm00.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric 2026-03-10T15:01:37.916 INFO:teuthology.orchestra.run.vm00.stdout: python3-simplejson python3-singledispatch python3-sklearn 2026-03-10T15:01:37.916 INFO:teuthology.orchestra.run.vm00.stdout: 
python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl 2026-03-10T15:01:37.916 INFO:teuthology.orchestra.run.vm00.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket 2026-03-10T15:01:37.916 INFO:teuthology.orchestra.run.vm00.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils 2026-03-10T15:01:37.916 INFO:teuthology.orchestra.run.vm00.stdout: sg3-utils-udev smartmontools socat xmlstarlet 2026-03-10T15:01:37.916 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T15:01:37.928 INFO:teuthology.orchestra.run.vm00.stdout:The following packages will be REMOVED: 2026-03-10T15:01:37.929 INFO:teuthology.orchestra.run.vm00.stdout: ceph-fuse* 2026-03-10T15:01:38.074 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 1 to remove and 12 not upgraded. 2026-03-10T15:01:38.075 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 3673 kB disk space will be freed. 2026-03-10T15:01:38.107 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... 117443 files and directories currently installed.) 2026-03-10T15:01:38.108 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 1 to remove and 12 not upgraded. 2026-03-10T15:01:38.108 INFO:teuthology.orchestra.run.vm00.stdout:After this operation, 3673 kB disk space will be freed. 
2026-03-10T15:01:38.109 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T15:01:38.147 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... 117443 files and directories currently installed.) 2026-03-10T15:01:38.150 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T15:01:38.563 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-10T15:01:38.594 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-10T15:01:38.668 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... 117434 files and directories currently installed.) 2026-03-10T15:01:38.670 INFO:teuthology.orchestra.run.vm03.stdout:Purging configuration files for ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-10T15:01:38.698 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... 117434 files and directories currently installed.) 2026-03-10T15:01:38.701 INFO:teuthology.orchestra.run.vm00.stdout:Purging configuration files for ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T15:01:40.200 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T15:01:40.235 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 2026-03-10T15:01:40.351 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T15:01:40.386 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists... 2026-03-10T15:01:40.410 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree... 2026-03-10T15:01:40.411 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information... 
2026-03-10T15:01:40.557 INFO:teuthology.orchestra.run.vm03.stdout:Package 'ceph-test' is not installed, so not removed 2026-03-10T15:01:40.557 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T15:01:40.557 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T15:01:40.557 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-10T15:01:40.558 INFO:teuthology.orchestra.run.vm03.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc 2026-03-10T15:01:40.558 INFO:teuthology.orchestra.run.vm03.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-10T15:01:40.558 INFO:teuthology.orchestra.run.vm03.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth 2026-03-10T15:01:40.558 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-10T15:01:40.558 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-10T15:01:40.558 INFO:teuthology.orchestra.run.vm03.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-10T15:01:40.558 INFO:teuthology.orchestra.run.vm03.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable 2026-03-10T15:01:40.558 INFO:teuthology.orchestra.run.vm03.stdout: python3-psutil python3-pyinotify python3-repoze.lru 2026-03-10T15:01:40.558 INFO:teuthology.orchestra.run.vm03.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric 2026-03-10T15:01:40.558 INFO:teuthology.orchestra.run.vm03.stdout: python3-simplejson python3-singledispatch python3-sklearn 2026-03-10T15:01:40.558 INFO:teuthology.orchestra.run.vm03.stdout: python3-sklearn-lib python3-tempita python3-tempora 
python3-threadpoolctl 2026-03-10T15:01:40.558 INFO:teuthology.orchestra.run.vm03.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket 2026-03-10T15:01:40.558 INFO:teuthology.orchestra.run.vm03.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils 2026-03-10T15:01:40.558 INFO:teuthology.orchestra.run.vm03.stdout: sg3-utils-udev smartmontools socat xmlstarlet 2026-03-10T15:01:40.558 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T15:01:40.577 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 0 to remove and 12 not upgraded. 2026-03-10T15:01:40.577 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T15:01:40.588 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree... 2026-03-10T15:01:40.589 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information... 2026-03-10T15:01:40.609 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 
2026-03-10T15:01:40.736 INFO:teuthology.orchestra.run.vm00.stdout:Package 'ceph-test' is not installed, so not removed 2026-03-10T15:01:40.736 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T15:01:40.736 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T15:01:40.736 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-10T15:01:40.736 INFO:teuthology.orchestra.run.vm00.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc 2026-03-10T15:01:40.736 INFO:teuthology.orchestra.run.vm00.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-10T15:01:40.736 INFO:teuthology.orchestra.run.vm00.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth 2026-03-10T15:01:40.736 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-10T15:01:40.736 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-10T15:01:40.736 INFO:teuthology.orchestra.run.vm00.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-10T15:01:40.736 INFO:teuthology.orchestra.run.vm00.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable 2026-03-10T15:01:40.736 INFO:teuthology.orchestra.run.vm00.stdout: python3-psutil python3-pyinotify python3-repoze.lru 2026-03-10T15:01:40.736 INFO:teuthology.orchestra.run.vm00.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric 2026-03-10T15:01:40.736 INFO:teuthology.orchestra.run.vm00.stdout: python3-simplejson python3-singledispatch python3-sklearn 2026-03-10T15:01:40.736 INFO:teuthology.orchestra.run.vm00.stdout: python3-sklearn-lib python3-tempita python3-tempora 
python3-threadpoolctl 2026-03-10T15:01:40.736 INFO:teuthology.orchestra.run.vm00.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket 2026-03-10T15:01:40.736 INFO:teuthology.orchestra.run.vm00.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils 2026-03-10T15:01:40.736 INFO:teuthology.orchestra.run.vm00.stdout: sg3-utils-udev smartmontools socat xmlstarlet 2026-03-10T15:01:40.736 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T15:01:40.756 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 0 to remove and 12 not upgraded. 2026-03-10T15:01:40.756 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T15:01:40.793 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists... 2026-03-10T15:01:40.795 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree... 2026-03-10T15:01:40.795 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information... 
2026-03-10T15:01:40.904 INFO:teuthology.orchestra.run.vm03.stdout:Package 'ceph-volume' is not installed, so not removed 2026-03-10T15:01:40.904 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T15:01:40.904 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T15:01:40.905 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-10T15:01:40.905 INFO:teuthology.orchestra.run.vm03.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc 2026-03-10T15:01:40.905 INFO:teuthology.orchestra.run.vm03.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-10T15:01:40.905 INFO:teuthology.orchestra.run.vm03.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth 2026-03-10T15:01:40.905 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-10T15:01:40.905 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-10T15:01:40.905 INFO:teuthology.orchestra.run.vm03.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-10T15:01:40.905 INFO:teuthology.orchestra.run.vm03.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable 2026-03-10T15:01:40.905 INFO:teuthology.orchestra.run.vm03.stdout: python3-psutil python3-pyinotify python3-repoze.lru 2026-03-10T15:01:40.905 INFO:teuthology.orchestra.run.vm03.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric 2026-03-10T15:01:40.905 INFO:teuthology.orchestra.run.vm03.stdout: python3-simplejson python3-singledispatch python3-sklearn 2026-03-10T15:01:40.905 INFO:teuthology.orchestra.run.vm03.stdout: python3-sklearn-lib python3-tempita python3-tempora 
python3-threadpoolctl 2026-03-10T15:01:40.905 INFO:teuthology.orchestra.run.vm03.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket 2026-03-10T15:01:40.905 INFO:teuthology.orchestra.run.vm03.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils 2026-03-10T15:01:40.905 INFO:teuthology.orchestra.run.vm03.stdout: sg3-utils-udev smartmontools socat xmlstarlet 2026-03-10T15:01:40.905 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T15:01:40.920 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 0 to remove and 12 not upgraded. 2026-03-10T15:01:40.920 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T15:01:40.953 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 2026-03-10T15:01:40.980 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree... 2026-03-10T15:01:40.980 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information... 2026-03-10T15:01:41.183 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree... 2026-03-10T15:01:41.184 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information... 
2026-03-10T15:01:41.216 INFO:teuthology.orchestra.run.vm00.stdout:Package 'ceph-volume' is not installed, so not removed 2026-03-10T15:01:41.216 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T15:01:41.216 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T15:01:41.216 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-10T15:01:41.217 INFO:teuthology.orchestra.run.vm00.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc 2026-03-10T15:01:41.217 INFO:teuthology.orchestra.run.vm00.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-10T15:01:41.217 INFO:teuthology.orchestra.run.vm00.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth 2026-03-10T15:01:41.217 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-10T15:01:41.217 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-10T15:01:41.217 INFO:teuthology.orchestra.run.vm00.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-10T15:01:41.217 INFO:teuthology.orchestra.run.vm00.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable 2026-03-10T15:01:41.217 INFO:teuthology.orchestra.run.vm00.stdout: python3-psutil python3-pyinotify python3-repoze.lru 2026-03-10T15:01:41.217 INFO:teuthology.orchestra.run.vm00.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric 2026-03-10T15:01:41.217 INFO:teuthology.orchestra.run.vm00.stdout: python3-simplejson python3-singledispatch python3-sklearn 2026-03-10T15:01:41.217 INFO:teuthology.orchestra.run.vm00.stdout: python3-sklearn-lib python3-tempita python3-tempora 
python3-threadpoolctl 2026-03-10T15:01:41.217 INFO:teuthology.orchestra.run.vm00.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket 2026-03-10T15:01:41.217 INFO:teuthology.orchestra.run.vm00.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils 2026-03-10T15:01:41.217 INFO:teuthology.orchestra.run.vm00.stdout: sg3-utils-udev smartmontools socat xmlstarlet 2026-03-10T15:01:41.217 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T15:01:41.233 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 0 to remove and 12 not upgraded. 2026-03-10T15:01:41.233 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T15:01:41.266 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists... 2026-03-10T15:01:41.328 INFO:teuthology.orchestra.run.vm03.stdout:Package 'radosgw' is not installed, so not removed 2026-03-10T15:01:41.328 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T15:01:41.328 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T15:01:41.328 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-10T15:01:41.328 INFO:teuthology.orchestra.run.vm03.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc 2026-03-10T15:01:41.328 INFO:teuthology.orchestra.run.vm03.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-10T15:01:41.328 INFO:teuthology.orchestra.run.vm03.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth 2026-03-10T15:01:41.328 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-10T15:01:41.328 
INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-10T15:01:41.328 INFO:teuthology.orchestra.run.vm03.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-10T15:01:41.328 INFO:teuthology.orchestra.run.vm03.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable 2026-03-10T15:01:41.328 INFO:teuthology.orchestra.run.vm03.stdout: python3-psutil python3-pyinotify python3-repoze.lru 2026-03-10T15:01:41.328 INFO:teuthology.orchestra.run.vm03.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric 2026-03-10T15:01:41.328 INFO:teuthology.orchestra.run.vm03.stdout: python3-simplejson python3-singledispatch python3-sklearn 2026-03-10T15:01:41.328 INFO:teuthology.orchestra.run.vm03.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl 2026-03-10T15:01:41.328 INFO:teuthology.orchestra.run.vm03.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket 2026-03-10T15:01:41.328 INFO:teuthology.orchestra.run.vm03.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils 2026-03-10T15:01:41.328 INFO:teuthology.orchestra.run.vm03.stdout: sg3-utils-udev smartmontools socat xmlstarlet 2026-03-10T15:01:41.328 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T15:01:41.343 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 0 to remove and 12 not upgraded. 2026-03-10T15:01:41.344 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T15:01:41.375 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 2026-03-10T15:01:41.457 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree... 2026-03-10T15:01:41.457 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information... 
2026-03-10T15:01:41.574 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree... 2026-03-10T15:01:41.575 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information... 2026-03-10T15:01:41.607 INFO:teuthology.orchestra.run.vm00.stdout:Package 'radosgw' is not installed, so not removed 2026-03-10T15:01:41.607 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T15:01:41.607 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T15:01:41.607 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-10T15:01:41.608 INFO:teuthology.orchestra.run.vm00.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc 2026-03-10T15:01:41.608 INFO:teuthology.orchestra.run.vm00.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-10T15:01:41.608 INFO:teuthology.orchestra.run.vm00.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth 2026-03-10T15:01:41.608 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-10T15:01:41.608 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-10T15:01:41.608 INFO:teuthology.orchestra.run.vm00.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-10T15:01:41.608 INFO:teuthology.orchestra.run.vm00.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable 2026-03-10T15:01:41.608 INFO:teuthology.orchestra.run.vm00.stdout: python3-psutil python3-pyinotify python3-repoze.lru 2026-03-10T15:01:41.608 INFO:teuthology.orchestra.run.vm00.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric 2026-03-10T15:01:41.608 INFO:teuthology.orchestra.run.vm00.stdout: 
python3-simplejson python3-singledispatch python3-sklearn 2026-03-10T15:01:41.608 INFO:teuthology.orchestra.run.vm00.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl 2026-03-10T15:01:41.608 INFO:teuthology.orchestra.run.vm00.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket 2026-03-10T15:01:41.608 INFO:teuthology.orchestra.run.vm00.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils 2026-03-10T15:01:41.608 INFO:teuthology.orchestra.run.vm00.stdout: sg3-utils-udev smartmontools socat xmlstarlet 2026-03-10T15:01:41.608 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T15:01:41.624 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 0 to remove and 12 not upgraded. 2026-03-10T15:01:41.624 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T15:01:41.658 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists... 
2026-03-10T15:01:41.736 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T15:01:41.736 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T15:01:41.736 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-10T15:01:41.736 INFO:teuthology.orchestra.run.vm03.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-10T15:01:41.737 INFO:teuthology.orchestra.run.vm03.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-10T15:01:41.737 INFO:teuthology.orchestra.run.vm03.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-10T15:01:41.737 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-10T15:01:41.737 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-10T15:01:41.737 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-10T15:01:41.737 INFO:teuthology.orchestra.run.vm03.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-10T15:01:41.737 INFO:teuthology.orchestra.run.vm03.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-10T15:01:41.737 INFO:teuthology.orchestra.run.vm03.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-10T15:01:41.737 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-10T15:01:41.737 INFO:teuthology.orchestra.run.vm03.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-10T15:01:41.737 INFO:teuthology.orchestra.run.vm03.stdout: python3-singledispatch 
python3-sklearn python3-sklearn-lib python3-tempita 2026-03-10T15:01:41.737 INFO:teuthology.orchestra.run.vm03.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-10T15:01:41.737 INFO:teuthology.orchestra.run.vm03.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-10T15:01:41.737 INFO:teuthology.orchestra.run.vm03.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-10T15:01:41.737 INFO:teuthology.orchestra.run.vm03.stdout: xmlstarlet zip 2026-03-10T15:01:41.737 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T15:01:41.749 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED: 2026-03-10T15:01:41.749 INFO:teuthology.orchestra.run.vm03.stdout: python3-cephfs* python3-rados* python3-rgw* 2026-03-10T15:01:41.861 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree... 2026-03-10T15:01:41.861 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information... 2026-03-10T15:01:41.913 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 3 to remove and 12 not upgraded. 2026-03-10T15:01:41.913 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 2062 kB disk space will be freed. 2026-03-10T15:01:41.954 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... 117434 files and directories currently installed.) 
2026-03-10T15:01:41.957 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T15:01:41.970 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-rgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T15:01:41.982 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-rados (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T15:01:41.998 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T15:01:41.998 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T15:01:41.998 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-10T15:01:41.998 INFO:teuthology.orchestra.run.vm00.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-10T15:01:41.999 INFO:teuthology.orchestra.run.vm00.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-10T15:01:41.999 INFO:teuthology.orchestra.run.vm00.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-10T15:01:41.999 INFO:teuthology.orchestra.run.vm00.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-10T15:01:41.999 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-10T15:01:41.999 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-10T15:01:41.999 INFO:teuthology.orchestra.run.vm00.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-10T15:01:41.999 INFO:teuthology.orchestra.run.vm00.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-10T15:01:41.999 INFO:teuthology.orchestra.run.vm00.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 
2026-03-10T15:01:41.999 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-10T15:01:41.999 INFO:teuthology.orchestra.run.vm00.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-10T15:01:41.999 INFO:teuthology.orchestra.run.vm00.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-10T15:01:41.999 INFO:teuthology.orchestra.run.vm00.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-10T15:01:41.999 INFO:teuthology.orchestra.run.vm00.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-10T15:01:41.999 INFO:teuthology.orchestra.run.vm00.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip
2026-03-10T15:01:41.999 INFO:teuthology.orchestra.run.vm00.stdout: xmlstarlet zip
2026-03-10T15:01:41.999 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T15:01:42.015 INFO:teuthology.orchestra.run.vm00.stdout:The following packages will be REMOVED:
2026-03-10T15:01:42.015 INFO:teuthology.orchestra.run.vm00.stdout: python3-cephfs* python3-rados* python3-rgw*
2026-03-10T15:01:42.196 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 3 to remove and 12 not upgraded.
2026-03-10T15:01:42.196 INFO:teuthology.orchestra.run.vm00.stdout:After this operation, 2062 kB disk space will be freed.
2026-03-10T15:01:42.237 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117434 files and directories currently installed.)
2026-03-10T15:01:42.239 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T15:01:42.253 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-rgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T15:01:42.266 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-rados (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T15:01:43.181 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T15:01:43.215 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-10T15:01:43.413 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-10T15:01:43.414 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-10T15:01:43.494 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T15:01:43.532 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists...
2026-03-10T15:01:43.609 INFO:teuthology.orchestra.run.vm03.stdout:Package 'python3-rgw' is not installed, so not removed
2026-03-10T15:01:43.610 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T15:01:43.610 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T15:01:43.610 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1
2026-03-10T15:01:43.610 INFO:teuthology.orchestra.run.vm03.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2
2026-03-10T15:01:43.610 INFO:teuthology.orchestra.run.vm03.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-10T15:01:43.610 INFO:teuthology.orchestra.run.vm03.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-10T15:01:43.610 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-10T15:01:43.610 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-10T15:01:43.610 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-10T15:01:43.610 INFO:teuthology.orchestra.run.vm03.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-10T15:01:43.610 INFO:teuthology.orchestra.run.vm03.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-10T15:01:43.610 INFO:teuthology.orchestra.run.vm03.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-10T15:01:43.610 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-10T15:01:43.610 INFO:teuthology.orchestra.run.vm03.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-10T15:01:43.610 INFO:teuthology.orchestra.run.vm03.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-10T15:01:43.611 INFO:teuthology.orchestra.run.vm03.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-10T15:01:43.611 INFO:teuthology.orchestra.run.vm03.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-10T15:01:43.611 INFO:teuthology.orchestra.run.vm03.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip
2026-03-10T15:01:43.611 INFO:teuthology.orchestra.run.vm03.stdout: xmlstarlet zip
2026-03-10T15:01:43.611 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T15:01:43.624 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 0 to remove and 12 not upgraded.
2026-03-10T15:01:43.625 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T15:01:43.660 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-10T15:01:43.754 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree...
2026-03-10T15:01:43.755 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information...
2026-03-10T15:01:43.820 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-10T15:01:43.821 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-10T15:01:43.964 INFO:teuthology.orchestra.run.vm03.stdout:Package 'python3-cephfs' is not installed, so not removed
2026-03-10T15:01:43.965 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T15:01:43.965 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T15:01:43.965 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1
2026-03-10T15:01:43.965 INFO:teuthology.orchestra.run.vm03.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2
2026-03-10T15:01:43.966 INFO:teuthology.orchestra.run.vm03.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-10T15:01:43.966 INFO:teuthology.orchestra.run.vm03.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-10T15:01:43.966 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-10T15:01:43.966 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-10T15:01:43.966 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-10T15:01:43.966 INFO:teuthology.orchestra.run.vm03.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-10T15:01:43.966 INFO:teuthology.orchestra.run.vm03.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-10T15:01:43.966 INFO:teuthology.orchestra.run.vm03.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-10T15:01:43.966 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-10T15:01:43.966 INFO:teuthology.orchestra.run.vm03.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-10T15:01:43.966 INFO:teuthology.orchestra.run.vm03.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-10T15:01:43.966 INFO:teuthology.orchestra.run.vm03.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-10T15:01:43.966 INFO:teuthology.orchestra.run.vm03.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-10T15:01:43.966 INFO:teuthology.orchestra.run.vm03.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip
2026-03-10T15:01:43.966 INFO:teuthology.orchestra.run.vm03.stdout: xmlstarlet zip
2026-03-10T15:01:43.966 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T15:01:43.996 INFO:teuthology.orchestra.run.vm00.stdout:Package 'python3-rgw' is not installed, so not removed
2026-03-10T15:01:43.996 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T15:01:43.996 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T15:01:43.997 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1
2026-03-10T15:01:43.997 INFO:teuthology.orchestra.run.vm00.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2
2026-03-10T15:01:43.997 INFO:teuthology.orchestra.run.vm00.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-10T15:01:43.997 INFO:teuthology.orchestra.run.vm00.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-10T15:01:43.997 INFO:teuthology.orchestra.run.vm00.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-10T15:01:43.997 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-10T15:01:43.997 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-10T15:01:43.997 INFO:teuthology.orchestra.run.vm00.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-10T15:01:43.997 INFO:teuthology.orchestra.run.vm00.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-10T15:01:43.997 INFO:teuthology.orchestra.run.vm00.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-10T15:01:43.998 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-10T15:01:43.998 INFO:teuthology.orchestra.run.vm00.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-10T15:01:43.998 INFO:teuthology.orchestra.run.vm00.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-10T15:01:43.998 INFO:teuthology.orchestra.run.vm00.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-10T15:01:43.998 INFO:teuthology.orchestra.run.vm00.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-10T15:01:43.998 INFO:teuthology.orchestra.run.vm00.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip
2026-03-10T15:01:43.998 INFO:teuthology.orchestra.run.vm00.stdout: xmlstarlet zip
2026-03-10T15:01:43.998 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T15:01:44.001 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 0 to remove and 12 not upgraded.
2026-03-10T15:01:44.002 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T15:01:44.019 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 0 to remove and 12 not upgraded.
2026-03-10T15:01:44.020 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T15:01:44.034 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-10T15:01:44.053 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists...
2026-03-10T15:01:44.205 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-10T15:01:44.205 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-10T15:01:44.212 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree...
2026-03-10T15:01:44.213 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information...
2026-03-10T15:01:44.430 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T15:01:44.430 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T15:01:44.430 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1
2026-03-10T15:01:44.430 INFO:teuthology.orchestra.run.vm03.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2
2026-03-10T15:01:44.431 INFO:teuthology.orchestra.run.vm00.stdout:Package 'python3-cephfs' is not installed, so not removed
2026-03-10T15:01:44.431 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T15:01:44.431 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T15:01:44.431 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1
2026-03-10T15:01:44.431 INFO:teuthology.orchestra.run.vm00.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2
2026-03-10T15:01:44.432 INFO:teuthology.orchestra.run.vm00.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-10T15:01:44.432 INFO:teuthology.orchestra.run.vm00.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-10T15:01:44.432 INFO:teuthology.orchestra.run.vm00.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-10T15:01:44.432 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-10T15:01:44.432 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-10T15:01:44.432 INFO:teuthology.orchestra.run.vm00.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-10T15:01:44.432 INFO:teuthology.orchestra.run.vm00.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-10T15:01:44.432 INFO:teuthology.orchestra.run.vm00.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-10T15:01:44.432 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-10T15:01:44.432 INFO:teuthology.orchestra.run.vm00.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-10T15:01:44.432 INFO:teuthology.orchestra.run.vm00.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-10T15:01:44.432 INFO:teuthology.orchestra.run.vm00.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-10T15:01:44.432 INFO:teuthology.orchestra.run.vm00.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-10T15:01:44.432 INFO:teuthology.orchestra.run.vm00.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip
2026-03-10T15:01:44.432 INFO:teuthology.orchestra.run.vm00.stdout: xmlstarlet zip
2026-03-10T15:01:44.432 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T15:01:44.432 INFO:teuthology.orchestra.run.vm03.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-10T15:01:44.432 INFO:teuthology.orchestra.run.vm03.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-10T15:01:44.432 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-10T15:01:44.432 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-10T15:01:44.432 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-10T15:01:44.432 INFO:teuthology.orchestra.run.vm03.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-10T15:01:44.432 INFO:teuthology.orchestra.run.vm03.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-10T15:01:44.432 INFO:teuthology.orchestra.run.vm03.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-10T15:01:44.432 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-10T15:01:44.432 INFO:teuthology.orchestra.run.vm03.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-10T15:01:44.432 INFO:teuthology.orchestra.run.vm03.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-10T15:01:44.432 INFO:teuthology.orchestra.run.vm03.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-10T15:01:44.432 INFO:teuthology.orchestra.run.vm03.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-10T15:01:44.432 INFO:teuthology.orchestra.run.vm03.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip
2026-03-10T15:01:44.432 INFO:teuthology.orchestra.run.vm03.stdout: xmlstarlet zip
2026-03-10T15:01:44.432 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T15:01:44.446 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 0 to remove and 12 not upgraded.
2026-03-10T15:01:44.446 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T15:01:44.450 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED:
2026-03-10T15:01:44.450 INFO:teuthology.orchestra.run.vm03.stdout: python3-rbd*
2026-03-10T15:01:44.481 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists...
2026-03-10T15:01:44.631 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 1 to remove and 12 not upgraded.
2026-03-10T15:01:44.631 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 1186 kB disk space will be freed.
2026-03-10T15:01:44.667 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree...
2026-03-10T15:01:44.668 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information...
2026-03-10T15:01:44.676 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117410 files and directories currently installed.)
2026-03-10T15:01:44.678 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-rbd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T15:01:44.785 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T15:01:44.785 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T15:01:44.785 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1
2026-03-10T15:01:44.785 INFO:teuthology.orchestra.run.vm00.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2
2026-03-10T15:01:44.785 INFO:teuthology.orchestra.run.vm00.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-10T15:01:44.785 INFO:teuthology.orchestra.run.vm00.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-10T15:01:44.785 INFO:teuthology.orchestra.run.vm00.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-10T15:01:44.785 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-10T15:01:44.785 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-10T15:01:44.785 INFO:teuthology.orchestra.run.vm00.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-10T15:01:44.785 INFO:teuthology.orchestra.run.vm00.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-10T15:01:44.785 INFO:teuthology.orchestra.run.vm00.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-10T15:01:44.785 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-10T15:01:44.785 INFO:teuthology.orchestra.run.vm00.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-10T15:01:44.785 INFO:teuthology.orchestra.run.vm00.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-10T15:01:44.785 INFO:teuthology.orchestra.run.vm00.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-10T15:01:44.785 INFO:teuthology.orchestra.run.vm00.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-10T15:01:44.785 INFO:teuthology.orchestra.run.vm00.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip
2026-03-10T15:01:44.785 INFO:teuthology.orchestra.run.vm00.stdout: xmlstarlet zip
2026-03-10T15:01:44.785 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T15:01:44.799 INFO:teuthology.orchestra.run.vm00.stdout:The following packages will be REMOVED:
2026-03-10T15:01:44.799 INFO:teuthology.orchestra.run.vm00.stdout: python3-rbd*
2026-03-10T15:01:44.986 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 1 to remove and 12 not upgraded.
2026-03-10T15:01:44.986 INFO:teuthology.orchestra.run.vm00.stdout:After this operation, 1186 kB disk space will be freed.
2026-03-10T15:01:45.022 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117410 files and directories currently installed.)
2026-03-10T15:01:45.023 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-rbd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T15:01:45.867 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T15:01:45.902 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-10T15:01:46.114 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-10T15:01:46.115 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-10T15:01:46.296 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T15:01:46.331 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists...
2026-03-10T15:01:46.342 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T15:01:46.343 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T15:01:46.343 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1
2026-03-10T15:01:46.343 INFO:teuthology.orchestra.run.vm03.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2
2026-03-10T15:01:46.343 INFO:teuthology.orchestra.run.vm03.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-10T15:01:46.344 INFO:teuthology.orchestra.run.vm03.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-10T15:01:46.344 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-10T15:01:46.344 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-10T15:01:46.344 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-10T15:01:46.344 INFO:teuthology.orchestra.run.vm03.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-10T15:01:46.344 INFO:teuthology.orchestra.run.vm03.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-10T15:01:46.344 INFO:teuthology.orchestra.run.vm03.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-10T15:01:46.344 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-10T15:01:46.344 INFO:teuthology.orchestra.run.vm03.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-10T15:01:46.344 INFO:teuthology.orchestra.run.vm03.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-10T15:01:46.344 INFO:teuthology.orchestra.run.vm03.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-10T15:01:46.344 INFO:teuthology.orchestra.run.vm03.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-10T15:01:46.344 INFO:teuthology.orchestra.run.vm03.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip
2026-03-10T15:01:46.344 INFO:teuthology.orchestra.run.vm03.stdout: xmlstarlet zip
2026-03-10T15:01:46.344 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T15:01:46.359 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED:
2026-03-10T15:01:46.360 INFO:teuthology.orchestra.run.vm03.stdout: libcephfs-dev* libcephfs2*
2026-03-10T15:01:46.541 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree...
2026-03-10T15:01:46.542 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information...
2026-03-10T15:01:46.546 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 2 to remove and 12 not upgraded.
2026-03-10T15:01:46.546 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 3202 kB disk space will be freed.
2026-03-10T15:01:46.590 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117402 files and directories currently installed.)
2026-03-10T15:01:46.594 INFO:teuthology.orchestra.run.vm03.stdout:Removing libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T15:01:46.606 INFO:teuthology.orchestra.run.vm03.stdout:Removing libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T15:01:46.631 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-10T15:01:46.790 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T15:01:46.790 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T15:01:46.791 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1
2026-03-10T15:01:46.791 INFO:teuthology.orchestra.run.vm00.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2
2026-03-10T15:01:46.791 INFO:teuthology.orchestra.run.vm00.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-10T15:01:46.791 INFO:teuthology.orchestra.run.vm00.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-10T15:01:46.791 INFO:teuthology.orchestra.run.vm00.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-10T15:01:46.791 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-10T15:01:46.791 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-10T15:01:46.792 INFO:teuthology.orchestra.run.vm00.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-10T15:01:46.792 INFO:teuthology.orchestra.run.vm00.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-10T15:01:46.792 INFO:teuthology.orchestra.run.vm00.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-10T15:01:46.792 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-10T15:01:46.792 INFO:teuthology.orchestra.run.vm00.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-10T15:01:46.792 INFO:teuthology.orchestra.run.vm00.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-10T15:01:46.792 INFO:teuthology.orchestra.run.vm00.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-10T15:01:46.792 INFO:teuthology.orchestra.run.vm00.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-10T15:01:46.792 INFO:teuthology.orchestra.run.vm00.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip
2026-03-10T15:01:46.792 INFO:teuthology.orchestra.run.vm00.stdout: xmlstarlet zip
2026-03-10T15:01:46.792 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T15:01:46.807 INFO:teuthology.orchestra.run.vm00.stdout:The following packages will be REMOVED:
2026-03-10T15:01:46.808 INFO:teuthology.orchestra.run.vm00.stdout: libcephfs-dev* libcephfs2*
2026-03-10T15:01:46.992 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 2 to remove and 12 not upgraded.
2026-03-10T15:01:46.992 INFO:teuthology.orchestra.run.vm00.stdout:After this operation, 3202 kB disk space will be freed.
2026-03-10T15:01:47.035 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117402 files and directories currently installed.)
2026-03-10T15:01:47.037 INFO:teuthology.orchestra.run.vm00.stdout:Removing libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T15:01:47.050 INFO:teuthology.orchestra.run.vm00.stdout:Removing libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T15:01:47.075 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-10T15:01:47.933 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T15:01:47.966 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 2026-03-10T15:01:48.164 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree... 2026-03-10T15:01:48.164 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information... 2026-03-10T15:01:48.262 INFO:teuthology.orchestra.run.vm03.stdout:Package 'libcephfs-dev' is not installed, so not removed 2026-03-10T15:01:48.262 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T15:01:48.262 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T15:01:48.262 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-10T15:01:48.262 INFO:teuthology.orchestra.run.vm03.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-10T15:01:48.263 INFO:teuthology.orchestra.run.vm03.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-10T15:01:48.263 INFO:teuthology.orchestra.run.vm03.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-10T15:01:48.263 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-10T15:01:48.263 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-10T15:01:48.263 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.collections 
python3-jaraco.functools python3-jaraco.text 2026-03-10T15:01:48.263 INFO:teuthology.orchestra.run.vm03.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-10T15:01:48.263 INFO:teuthology.orchestra.run.vm03.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-10T15:01:48.263 INFO:teuthology.orchestra.run.vm03.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-10T15:01:48.263 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-10T15:01:48.263 INFO:teuthology.orchestra.run.vm03.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-10T15:01:48.263 INFO:teuthology.orchestra.run.vm03.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-10T15:01:48.263 INFO:teuthology.orchestra.run.vm03.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-10T15:01:48.263 INFO:teuthology.orchestra.run.vm03.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-10T15:01:48.263 INFO:teuthology.orchestra.run.vm03.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-10T15:01:48.263 INFO:teuthology.orchestra.run.vm03.stdout: xmlstarlet zip 2026-03-10T15:01:48.263 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T15:01:48.278 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 0 to remove and 12 not upgraded. 2026-03-10T15:01:48.278 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T15:01:48.313 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 2026-03-10T15:01:48.364 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 
2026-03-10T15:01:48.400 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists... 2026-03-10T15:01:48.533 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree... 2026-03-10T15:01:48.534 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information... 2026-03-10T15:01:48.606 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree... 2026-03-10T15:01:48.606 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information... 2026-03-10T15:01:48.745 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T15:01:48.745 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T15:01:48.745 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-10T15:01:48.745 INFO:teuthology.orchestra.run.vm03.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-10T15:01:48.745 INFO:teuthology.orchestra.run.vm03.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-10T15:01:48.745 INFO:teuthology.orchestra.run.vm03.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-10T15:01:48.746 INFO:teuthology.orchestra.run.vm03.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-10T15:01:48.746 INFO:teuthology.orchestra.run.vm03.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-10T15:01:48.746 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-10T15:01:48.746 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-10T15:01:48.746 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-10T15:01:48.746 
INFO:teuthology.orchestra.run.vm03.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-10T15:01:48.746 INFO:teuthology.orchestra.run.vm03.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-10T15:01:48.746 INFO:teuthology.orchestra.run.vm03.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-10T15:01:48.746 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-10T15:01:48.746 INFO:teuthology.orchestra.run.vm03.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-10T15:01:48.746 INFO:teuthology.orchestra.run.vm03.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-10T15:01:48.746 INFO:teuthology.orchestra.run.vm03.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-10T15:01:48.746 INFO:teuthology.orchestra.run.vm03.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-10T15:01:48.746 INFO:teuthology.orchestra.run.vm03.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev 2026-03-10T15:01:48.746 INFO:teuthology.orchestra.run.vm03.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-10T15:01:48.746 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them. 
2026-03-10T15:01:48.761 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED: 2026-03-10T15:01:48.761 INFO:teuthology.orchestra.run.vm03.stdout: librados2* libradosstriper1* librbd1* librgw2* libsqlite3-mod-ceph* 2026-03-10T15:01:48.762 INFO:teuthology.orchestra.run.vm03.stdout: qemu-block-extra* rbd-fuse* 2026-03-10T15:01:48.811 INFO:teuthology.orchestra.run.vm00.stdout:Package 'libcephfs-dev' is not installed, so not removed 2026-03-10T15:01:48.811 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T15:01:48.811 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T15:01:48.811 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-10T15:01:48.811 INFO:teuthology.orchestra.run.vm00.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-10T15:01:48.812 INFO:teuthology.orchestra.run.vm00.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-10T15:01:48.812 INFO:teuthology.orchestra.run.vm00.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-10T15:01:48.812 INFO:teuthology.orchestra.run.vm00.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-10T15:01:48.812 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-10T15:01:48.812 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-10T15:01:48.812 INFO:teuthology.orchestra.run.vm00.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-10T15:01:48.812 INFO:teuthology.orchestra.run.vm00.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-10T15:01:48.812 
INFO:teuthology.orchestra.run.vm00.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-10T15:01:48.812 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-10T15:01:48.812 INFO:teuthology.orchestra.run.vm00.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-10T15:01:48.812 INFO:teuthology.orchestra.run.vm00.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-10T15:01:48.812 INFO:teuthology.orchestra.run.vm00.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-10T15:01:48.812 INFO:teuthology.orchestra.run.vm00.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-10T15:01:48.813 INFO:teuthology.orchestra.run.vm00.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-10T15:01:48.813 INFO:teuthology.orchestra.run.vm00.stdout: xmlstarlet zip 2026-03-10T15:01:48.813 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T15:01:48.842 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 0 to remove and 12 not upgraded. 2026-03-10T15:01:48.842 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T15:01:48.881 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists... 2026-03-10T15:01:48.951 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 7 to remove and 12 not upgraded. 2026-03-10T15:01:48.951 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 51.6 MB disk space will be freed. 2026-03-10T15:01:48.992 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 
30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117387 files and directories currently installed.) 2026-03-10T15:01:48.994 INFO:teuthology.orchestra.run.vm03.stdout:Removing rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T15:01:49.007 INFO:teuthology.orchestra.run.vm03.stdout:Removing libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T15:01:49.019 INFO:teuthology.orchestra.run.vm03.stdout:Removing libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T15:01:49.030 INFO:teuthology.orchestra.run.vm03.stdout:Removing qemu-block-extra (1:6.2+dfsg-2ubuntu6.28) ... 2026-03-10T15:01:49.090 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree... 2026-03-10T15:01:49.090 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information... 
2026-03-10T15:01:49.264 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T15:01:49.264 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T15:01:49.264 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-10T15:01:49.264 INFO:teuthology.orchestra.run.vm00.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-10T15:01:49.264 INFO:teuthology.orchestra.run.vm00.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-10T15:01:49.264 INFO:teuthology.orchestra.run.vm00.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-10T15:01:49.265 INFO:teuthology.orchestra.run.vm00.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-10T15:01:49.265 INFO:teuthology.orchestra.run.vm00.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-10T15:01:49.265 INFO:teuthology.orchestra.run.vm00.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-10T15:01:49.265 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-10T15:01:49.265 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-10T15:01:49.265 INFO:teuthology.orchestra.run.vm00.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-10T15:01:49.265 INFO:teuthology.orchestra.run.vm00.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-10T15:01:49.265 INFO:teuthology.orchestra.run.vm00.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-10T15:01:49.265 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyinotify 
python3-repoze.lru python3-requests-oauthlib 2026-03-10T15:01:49.265 INFO:teuthology.orchestra.run.vm00.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-10T15:01:49.265 INFO:teuthology.orchestra.run.vm00.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-10T15:01:49.265 INFO:teuthology.orchestra.run.vm00.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-10T15:01:49.265 INFO:teuthology.orchestra.run.vm00.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-10T15:01:49.265 INFO:teuthology.orchestra.run.vm00.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev 2026-03-10T15:01:49.265 INFO:teuthology.orchestra.run.vm00.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-10T15:01:49.265 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T15:01:49.281 INFO:teuthology.orchestra.run.vm00.stdout:The following packages will be REMOVED: 2026-03-10T15:01:49.281 INFO:teuthology.orchestra.run.vm00.stdout: librados2* libradosstriper1* librbd1* librgw2* libsqlite3-mod-ceph* 2026-03-10T15:01:49.281 INFO:teuthology.orchestra.run.vm00.stdout: qemu-block-extra* rbd-fuse* 2026-03-10T15:01:49.464 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 7 to remove and 12 not upgraded. 2026-03-10T15:01:49.464 INFO:teuthology.orchestra.run.vm00.stdout:After this operation, 51.6 MB disk space will be freed. 2026-03-10T15:01:49.481 INFO:teuthology.orchestra.run.vm03.stdout:Removing librbd1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T15:01:49.494 INFO:teuthology.orchestra.run.vm03.stdout:Removing librgw2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T15:01:49.508 INFO:teuthology.orchestra.run.vm03.stdout:Removing librados2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T15:01:49.508 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... (Reading database ... 
5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117387 files and directories currently installed.) 2026-03-10T15:01:49.511 INFO:teuthology.orchestra.run.vm00.stdout:Removing rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T15:01:49.523 INFO:teuthology.orchestra.run.vm00.stdout:Removing libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T15:01:49.534 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-10T15:01:49.537 INFO:teuthology.orchestra.run.vm00.stdout:Removing libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T15:01:49.549 INFO:teuthology.orchestra.run.vm00.stdout:Removing qemu-block-extra (1:6.2+dfsg-2ubuntu6.28) ... 2026-03-10T15:01:49.643 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-10T15:01:49.719 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117336 files and directories currently installed.) 
2026-03-10T15:01:49.721 INFO:teuthology.orchestra.run.vm03.stdout:Purging configuration files for qemu-block-extra (1:6.2+dfsg-2ubuntu6.28) ... 2026-03-10T15:01:50.176 INFO:teuthology.orchestra.run.vm00.stdout:Removing librbd1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T15:01:50.188 INFO:teuthology.orchestra.run.vm00.stdout:Removing librgw2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T15:01:50.202 INFO:teuthology.orchestra.run.vm00.stdout:Removing librados2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T15:01:50.230 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-10T15:01:50.273 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-10T15:01:50.349 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117336 files and directories currently installed.) 2026-03-10T15:01:50.352 INFO:teuthology.orchestra.run.vm00.stdout:Purging configuration files for qemu-block-extra (1:6.2+dfsg-2ubuntu6.28) ... 2026-03-10T15:01:51.418 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T15:01:51.454 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 2026-03-10T15:01:51.665 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree... 2026-03-10T15:01:51.665 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information... 
2026-03-10T15:01:51.801 INFO:teuthology.orchestra.run.vm03.stdout:Package 'librbd1' is not installed, so not removed 2026-03-10T15:01:51.801 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T15:01:51.801 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T15:01:51.801 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-10T15:01:51.801 INFO:teuthology.orchestra.run.vm03.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-10T15:01:51.801 INFO:teuthology.orchestra.run.vm03.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-10T15:01:51.802 INFO:teuthology.orchestra.run.vm03.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-10T15:01:51.802 INFO:teuthology.orchestra.run.vm03.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-10T15:01:51.802 INFO:teuthology.orchestra.run.vm03.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-10T15:01:51.802 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-10T15:01:51.802 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-10T15:01:51.802 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-10T15:01:51.802 INFO:teuthology.orchestra.run.vm03.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-10T15:01:51.802 INFO:teuthology.orchestra.run.vm03.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-10T15:01:51.803 INFO:teuthology.orchestra.run.vm03.stdout: python3-pecan python3-portend 
python3-prettytable python3-psutil 2026-03-10T15:01:51.803 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-10T15:01:51.803 INFO:teuthology.orchestra.run.vm03.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-10T15:01:51.803 INFO:teuthology.orchestra.run.vm03.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-10T15:01:51.803 INFO:teuthology.orchestra.run.vm03.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-10T15:01:51.803 INFO:teuthology.orchestra.run.vm03.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-10T15:01:51.803 INFO:teuthology.orchestra.run.vm03.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev 2026-03-10T15:01:51.803 INFO:teuthology.orchestra.run.vm03.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-10T15:01:51.803 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T15:01:51.830 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 0 to remove and 12 not upgraded. 2026-03-10T15:01:51.830 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T15:01:51.864 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 2026-03-10T15:01:51.989 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T15:01:52.025 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists... 2026-03-10T15:01:52.058 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree... 2026-03-10T15:01:52.058 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information... 
2026-03-10T15:01:52.195 INFO:teuthology.orchestra.run.vm03.stdout:Package 'rbd-fuse' is not installed, so not removed 2026-03-10T15:01:52.195 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T15:01:52.195 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T15:01:52.195 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-10T15:01:52.195 INFO:teuthology.orchestra.run.vm03.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-10T15:01:52.195 INFO:teuthology.orchestra.run.vm03.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-10T15:01:52.195 INFO:teuthology.orchestra.run.vm03.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-10T15:01:52.195 INFO:teuthology.orchestra.run.vm03.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-10T15:01:52.195 INFO:teuthology.orchestra.run.vm03.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-10T15:01:52.195 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-10T15:01:52.195 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-10T15:01:52.195 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-10T15:01:52.195 INFO:teuthology.orchestra.run.vm03.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-10T15:01:52.195 INFO:teuthology.orchestra.run.vm03.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-10T15:01:52.195 INFO:teuthology.orchestra.run.vm03.stdout: python3-pecan python3-portend 
python3-prettytable python3-psutil 2026-03-10T15:01:52.195 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-10T15:01:52.195 INFO:teuthology.orchestra.run.vm03.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-10T15:01:52.195 INFO:teuthology.orchestra.run.vm03.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-10T15:01:52.195 INFO:teuthology.orchestra.run.vm03.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-10T15:01:52.195 INFO:teuthology.orchestra.run.vm03.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-10T15:01:52.196 INFO:teuthology.orchestra.run.vm03.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev 2026-03-10T15:01:52.196 INFO:teuthology.orchestra.run.vm03.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-10T15:01:52.196 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T15:01:52.216 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 0 to remove and 12 not upgraded. 2026-03-10T15:01:52.217 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T15:01:52.218 DEBUG:teuthology.orchestra.run.vm03:> dpkg -l | grep '^.\(U\|H\)R' | awk '{print $2}' | sudo xargs --no-run-if-empty dpkg -P --force-remove-reinstreq 2026-03-10T15:01:52.233 DEBUG:teuthology.orchestra.run.vm03:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" autoremove 2026-03-10T15:01:52.237 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree... 2026-03-10T15:01:52.237 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information... 
2026-03-10T15:01:52.316 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 2026-03-10T15:01:52.470 INFO:teuthology.orchestra.run.vm00.stdout:Package 'librbd1' is not installed, so not removed 2026-03-10T15:01:52.470 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T15:01:52.470 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T15:01:52.470 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-10T15:01:52.470 INFO:teuthology.orchestra.run.vm00.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-10T15:01:52.470 INFO:teuthology.orchestra.run.vm00.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-10T15:01:52.471 INFO:teuthology.orchestra.run.vm00.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-10T15:01:52.471 INFO:teuthology.orchestra.run.vm00.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-10T15:01:52.471 INFO:teuthology.orchestra.run.vm00.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-10T15:01:52.471 INFO:teuthology.orchestra.run.vm00.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-10T15:01:52.471 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-10T15:01:52.471 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-10T15:01:52.471 INFO:teuthology.orchestra.run.vm00.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-10T15:01:52.471 INFO:teuthology.orchestra.run.vm00.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-10T15:01:52.471 
INFO:teuthology.orchestra.run.vm00.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-10T15:01:52.471 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-10T15:01:52.471 INFO:teuthology.orchestra.run.vm00.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-10T15:01:52.471 INFO:teuthology.orchestra.run.vm00.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-10T15:01:52.471 INFO:teuthology.orchestra.run.vm00.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-10T15:01:52.471 INFO:teuthology.orchestra.run.vm00.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-10T15:01:52.471 INFO:teuthology.orchestra.run.vm00.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev 2026-03-10T15:01:52.471 INFO:teuthology.orchestra.run.vm00.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-10T15:01:52.471 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T15:01:52.498 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 0 to remove and 12 not upgraded. 2026-03-10T15:01:52.498 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T15:01:52.532 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists... 2026-03-10T15:01:52.544 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree... 2026-03-10T15:01:52.545 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information... 
2026-03-10T15:01:52.751 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED: 2026-03-10T15:01:52.751 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T15:01:52.751 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-10T15:01:52.751 INFO:teuthology.orchestra.run.vm03.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-10T15:01:52.751 INFO:teuthology.orchestra.run.vm03.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-10T15:01:52.752 INFO:teuthology.orchestra.run.vm03.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-10T15:01:52.752 INFO:teuthology.orchestra.run.vm03.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-10T15:01:52.752 INFO:teuthology.orchestra.run.vm03.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-10T15:01:52.752 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-10T15:01:52.752 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-10T15:01:52.752 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-10T15:01:52.752 INFO:teuthology.orchestra.run.vm03.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-10T15:01:52.752 INFO:teuthology.orchestra.run.vm03.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-10T15:01:52.752 INFO:teuthology.orchestra.run.vm03.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-10T15:01:52.752 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 
2026-03-10T15:01:52.752 INFO:teuthology.orchestra.run.vm03.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-10T15:01:52.752 INFO:teuthology.orchestra.run.vm03.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-10T15:01:52.752 INFO:teuthology.orchestra.run.vm03.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-10T15:01:52.752 INFO:teuthology.orchestra.run.vm03.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-10T15:01:52.752 INFO:teuthology.orchestra.run.vm03.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev
2026-03-10T15:01:52.752 INFO:teuthology.orchestra.run.vm03.stdout: smartmontools socat unzip xmlstarlet zip
2026-03-10T15:01:52.755 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree...
2026-03-10T15:01:52.756 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information...
2026-03-10T15:01:52.962 INFO:teuthology.orchestra.run.vm00.stdout:Package 'rbd-fuse' is not installed, so not removed
2026-03-10T15:01:52.962 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T15:01:52.962 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T15:01:52.962 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0
2026-03-10T15:01:52.962 INFO:teuthology.orchestra.run.vm00.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0
2026-03-10T15:01:52.962 INFO:teuthology.orchestra.run.vm00.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5
2026-03-10T15:01:52.962 INFO:teuthology.orchestra.run.vm00.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0
2026-03-10T15:01:52.962 INFO:teuthology.orchestra.run.vm00.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config
2026-03-10T15:01:52.962 INFO:teuthology.orchestra.run.vm00.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-10T15:01:52.962 INFO:teuthology.orchestra.run.vm00.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-10T15:01:52.962 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-10T15:01:52.962 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-10T15:01:52.962 INFO:teuthology.orchestra.run.vm00.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-10T15:01:52.962 INFO:teuthology.orchestra.run.vm00.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-10T15:01:52.962 INFO:teuthology.orchestra.run.vm00.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-10T15:01:52.962 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-10T15:01:52.962 INFO:teuthology.orchestra.run.vm00.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-10T15:01:52.962 INFO:teuthology.orchestra.run.vm00.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-10T15:01:52.962 INFO:teuthology.orchestra.run.vm00.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-10T15:01:52.962 INFO:teuthology.orchestra.run.vm00.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-10T15:01:52.962 INFO:teuthology.orchestra.run.vm00.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev
2026-03-10T15:01:52.962 INFO:teuthology.orchestra.run.vm00.stdout: smartmontools socat unzip xmlstarlet zip
2026-03-10T15:01:52.962 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T15:01:52.976 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 0 to remove and 12 not upgraded.
2026-03-10T15:01:52.977 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T15:01:52.978 DEBUG:teuthology.orchestra.run.vm00:> dpkg -l | grep '^.\(U\|H\)R' | awk '{print $2}' | sudo xargs --no-run-if-empty dpkg -P --force-remove-reinstreq
2026-03-10T15:01:52.990 DEBUG:teuthology.orchestra.run.vm00:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" autoremove
2026-03-10T15:01:53.066 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists...
2026-03-10T15:01:53.088 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 87 to remove and 12 not upgraded.
2026-03-10T15:01:53.088 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 107 MB disk space will be freed.
2026-03-10T15:01:53.162 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117336 files and directories currently installed.)
2026-03-10T15:01:53.163 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T15:01:53.195 INFO:teuthology.orchestra.run.vm03.stdout:Removing jq (1.6-2.1ubuntu3.1) ...
2026-03-10T15:01:53.208 INFO:teuthology.orchestra.run.vm03.stdout:Removing kpartx (0.8.8-1ubuntu1.22.04.4) ...
2026-03-10T15:01:53.220 INFO:teuthology.orchestra.run.vm03.stdout:Removing libboost-iostreams1.74.0:amd64 (1.74.0-14ubuntu3) ...
2026-03-10T15:01:53.233 INFO:teuthology.orchestra.run.vm03.stdout:Removing libboost-thread1.74.0:amd64 (1.74.0-14ubuntu3) ...
2026-03-10T15:01:53.245 INFO:teuthology.orchestra.run.vm03.stdout:Removing libthrift-0.16.0:amd64 (0.16.0-2) ...
2026-03-10T15:01:53.257 INFO:teuthology.orchestra.run.vm03.stdout:Removing libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-10T15:01:53.267 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree...
2026-03-10T15:01:53.267 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information...
2026-03-10T15:01:53.269 INFO:teuthology.orchestra.run.vm03.stdout:Removing libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-10T15:01:53.281 INFO:teuthology.orchestra.run.vm03.stdout:Removing libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-10T15:01:53.302 INFO:teuthology.orchestra.run.vm03.stdout:Removing libdouble-conversion3:amd64 (3.1.7-4) ...
2026-03-10T15:01:53.312 INFO:teuthology.orchestra.run.vm03.stdout:Removing libfuse2:amd64 (2.9.9-5ubuntu3) ...
2026-03-10T15:01:53.323 INFO:teuthology.orchestra.run.vm03.stdout:Removing libgfapi0:amd64 (10.1-1ubuntu0.2) ...
2026-03-10T15:01:53.333 INFO:teuthology.orchestra.run.vm03.stdout:Removing libgfrpc0:amd64 (10.1-1ubuntu0.2) ...
2026-03-10T15:01:53.343 INFO:teuthology.orchestra.run.vm03.stdout:Removing libgfxdr0:amd64 (10.1-1ubuntu0.2) ...
2026-03-10T15:01:53.355 INFO:teuthology.orchestra.run.vm03.stdout:Removing libglusterfs0:amd64 (10.1-1ubuntu0.2) ...
2026-03-10T15:01:53.367 INFO:teuthology.orchestra.run.vm03.stdout:Removing libiscsi7:amd64 (1.19.0-3build2) ...
2026-03-10T15:01:53.379 INFO:teuthology.orchestra.run.vm03.stdout:Removing libjq1:amd64 (1.6-2.1ubuntu3.1) ...
2026-03-10T15:01:53.392 INFO:teuthology.orchestra.run.vm03.stdout:Removing liblttng-ust1:amd64 (2.13.1-1ubuntu1) ...
2026-03-10T15:01:53.406 INFO:teuthology.orchestra.run.vm03.stdout:Removing luarocks (3.8.0+dfsg1-1) ...
2026-03-10T15:01:53.432 INFO:teuthology.orchestra.run.vm03.stdout:Removing liblua5.3-dev:amd64 (5.3.6-1build1) ...
2026-03-10T15:01:53.444 INFO:teuthology.orchestra.run.vm03.stdout:Removing libnbd0 (1.10.5-1) ...
2026-03-10T15:01:53.457 INFO:teuthology.orchestra.run.vm03.stdout:Removing liboath0:amd64 (2.6.7-3ubuntu0.1) ...
2026-03-10T15:01:53.466 INFO:teuthology.orchestra.run.vm00.stdout:The following packages will be REMOVED:
2026-03-10T15:01:53.466 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T15:01:53.466 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0
2026-03-10T15:01:53.466 INFO:teuthology.orchestra.run.vm00.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0
2026-03-10T15:01:53.466 INFO:teuthology.orchestra.run.vm00.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5
2026-03-10T15:01:53.466 INFO:teuthology.orchestra.run.vm00.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0
2026-03-10T15:01:53.467 INFO:teuthology.orchestra.run.vm00.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config
2026-03-10T15:01:53.467 INFO:teuthology.orchestra.run.vm00.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-10T15:01:53.467 INFO:teuthology.orchestra.run.vm00.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-10T15:01:53.467 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-10T15:01:53.467 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-10T15:01:53.467 INFO:teuthology.orchestra.run.vm00.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-10T15:01:53.467 INFO:teuthology.orchestra.run.vm00.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-10T15:01:53.467 INFO:teuthology.orchestra.run.vm00.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-10T15:01:53.467 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-10T15:01:53.467 INFO:teuthology.orchestra.run.vm00.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-10T15:01:53.467 INFO:teuthology.orchestra.run.vm00.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-10T15:01:53.467 INFO:teuthology.orchestra.run.vm00.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-10T15:01:53.467 INFO:teuthology.orchestra.run.vm00.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-10T15:01:53.467 INFO:teuthology.orchestra.run.vm00.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev
2026-03-10T15:01:53.467 INFO:teuthology.orchestra.run.vm00.stdout: smartmontools socat unzip xmlstarlet zip
2026-03-10T15:01:53.469 INFO:teuthology.orchestra.run.vm03.stdout:Removing libonig5:amd64 (6.9.7.1-2build1) ...
2026-03-10T15:01:53.481 INFO:teuthology.orchestra.run.vm03.stdout:Removing libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ...
2026-03-10T15:01:53.494 INFO:teuthology.orchestra.run.vm03.stdout:Removing libpmemobj1:amd64 (1.11.1-3build1) ...
2026-03-10T15:01:53.506 INFO:teuthology.orchestra.run.vm03.stdout:Removing librdkafka1:amd64 (1.8.0-1build1) ...
2026-03-10T15:01:53.517 INFO:teuthology.orchestra.run.vm03.stdout:Removing libreadline-dev:amd64 (8.1.2-1) ...
2026-03-10T15:01:53.529 INFO:teuthology.orchestra.run.vm03.stdout:Removing sg3-utils-udev (1.46-1ubuntu0.22.04.1) ...
2026-03-10T15:01:53.537 INFO:teuthology.orchestra.run.vm03.stdout:update-initramfs: deferring update (trigger activated)
2026-03-10T15:01:53.547 INFO:teuthology.orchestra.run.vm03.stdout:Removing sg3-utils (1.46-1ubuntu0.22.04.1) ...
2026-03-10T15:01:53.568 INFO:teuthology.orchestra.run.vm03.stdout:Removing libsgutils2-2:amd64 (1.46-1ubuntu0.22.04.1) ...
2026-03-10T15:01:53.580 INFO:teuthology.orchestra.run.vm03.stdout:Removing lua-any (27ubuntu1) ...
2026-03-10T15:01:53.592 INFO:teuthology.orchestra.run.vm03.stdout:Removing lua-sec:amd64 (1.0.2-1) ...
2026-03-10T15:01:53.603 INFO:teuthology.orchestra.run.vm03.stdout:Removing lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ...
2026-03-10T15:01:53.618 INFO:teuthology.orchestra.run.vm03.stdout:Removing lua5.1 (5.1.5-8.1build4) ...
2026-03-10T15:01:53.635 INFO:teuthology.orchestra.run.vm03.stdout:Removing nvme-cli (1.16-3ubuntu0.3) ...
2026-03-10T15:01:53.659 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 87 to remove and 12 not upgraded.
2026-03-10T15:01:53.659 INFO:teuthology.orchestra.run.vm00.stdout:After this operation, 107 MB disk space will be freed.
2026-03-10T15:01:53.694 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117336 files and directories currently installed.)
2026-03-10T15:01:53.695 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T15:01:53.711 INFO:teuthology.orchestra.run.vm00.stdout:Removing jq (1.6-2.1ubuntu3.1) ...
2026-03-10T15:01:53.723 INFO:teuthology.orchestra.run.vm00.stdout:Removing kpartx (0.8.8-1ubuntu1.22.04.4) ...
2026-03-10T15:01:53.735 INFO:teuthology.orchestra.run.vm00.stdout:Removing libboost-iostreams1.74.0:amd64 (1.74.0-14ubuntu3) ...
2026-03-10T15:01:53.758 INFO:teuthology.orchestra.run.vm00.stdout:Removing libboost-thread1.74.0:amd64 (1.74.0-14ubuntu3) ...
2026-03-10T15:01:53.770 INFO:teuthology.orchestra.run.vm00.stdout:Removing libthrift-0.16.0:amd64 (0.16.0-2) ...
2026-03-10T15:01:53.781 INFO:teuthology.orchestra.run.vm00.stdout:Removing libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-10T15:01:53.792 INFO:teuthology.orchestra.run.vm00.stdout:Removing libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-10T15:01:53.804 INFO:teuthology.orchestra.run.vm00.stdout:Removing libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-10T15:01:53.824 INFO:teuthology.orchestra.run.vm00.stdout:Removing libdouble-conversion3:amd64 (3.1.7-4) ...
2026-03-10T15:01:53.836 INFO:teuthology.orchestra.run.vm00.stdout:Removing libfuse2:amd64 (2.9.9-5ubuntu3) ...
2026-03-10T15:01:53.849 INFO:teuthology.orchestra.run.vm00.stdout:Removing libgfapi0:amd64 (10.1-1ubuntu0.2) ...
2026-03-10T15:01:53.861 INFO:teuthology.orchestra.run.vm00.stdout:Removing libgfrpc0:amd64 (10.1-1ubuntu0.2) ...
2026-03-10T15:01:53.874 INFO:teuthology.orchestra.run.vm00.stdout:Removing libgfxdr0:amd64 (10.1-1ubuntu0.2) ...
2026-03-10T15:01:53.886 INFO:teuthology.orchestra.run.vm00.stdout:Removing libglusterfs0:amd64 (10.1-1ubuntu0.2) ...
2026-03-10T15:01:53.898 INFO:teuthology.orchestra.run.vm00.stdout:Removing libiscsi7:amd64 (1.19.0-3build2) ...
2026-03-10T15:01:53.911 INFO:teuthology.orchestra.run.vm00.stdout:Removing libjq1:amd64 (1.6-2.1ubuntu3.1) ...
2026-03-10T15:01:53.923 INFO:teuthology.orchestra.run.vm00.stdout:Removing liblttng-ust1:amd64 (2.13.1-1ubuntu1) ...
2026-03-10T15:01:53.935 INFO:teuthology.orchestra.run.vm00.stdout:Removing luarocks (3.8.0+dfsg1-1) ...
2026-03-10T15:01:53.963 INFO:teuthology.orchestra.run.vm00.stdout:Removing liblua5.3-dev:amd64 (5.3.6-1build1) ...
2026-03-10T15:01:53.976 INFO:teuthology.orchestra.run.vm00.stdout:Removing libnbd0 (1.10.5-1) ...
2026-03-10T15:01:53.987 INFO:teuthology.orchestra.run.vm00.stdout:Removing liboath0:amd64 (2.6.7-3ubuntu0.1) ...
2026-03-10T15:01:54.000 INFO:teuthology.orchestra.run.vm00.stdout:Removing libonig5:amd64 (6.9.7.1-2build1) ...
2026-03-10T15:01:54.011 INFO:teuthology.orchestra.run.vm00.stdout:Removing libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ...
2026-03-10T15:01:54.023 INFO:teuthology.orchestra.run.vm00.stdout:Removing libpmemobj1:amd64 (1.11.1-3build1) ...
2026-03-10T15:01:54.035 INFO:teuthology.orchestra.run.vm00.stdout:Removing librdkafka1:amd64 (1.8.0-1build1) ...
2026-03-10T15:01:54.047 INFO:teuthology.orchestra.run.vm00.stdout:Removing libreadline-dev:amd64 (8.1.2-1) ...
2026-03-10T15:01:54.059 INFO:teuthology.orchestra.run.vm00.stdout:Removing sg3-utils-udev (1.46-1ubuntu0.22.04.1) ...
2026-03-10T15:01:54.067 INFO:teuthology.orchestra.run.vm00.stdout:update-initramfs: deferring update (trigger activated)
2026-03-10T15:01:54.073 INFO:teuthology.orchestra.run.vm03.stdout:Removing pkg-config (0.29.2-1ubuntu3) ...
2026-03-10T15:01:54.079 INFO:teuthology.orchestra.run.vm00.stdout:Removing sg3-utils (1.46-1ubuntu0.22.04.1) ...
2026-03-10T15:01:54.100 INFO:teuthology.orchestra.run.vm00.stdout:Removing libsgutils2-2:amd64 (1.46-1ubuntu0.22.04.1) ...
2026-03-10T15:01:54.108 INFO:teuthology.orchestra.run.vm03.stdout:Removing python-asyncssh-doc (2.5.0-1ubuntu0.1) ...
2026-03-10T15:01:54.113 INFO:teuthology.orchestra.run.vm00.stdout:Removing lua-any (27ubuntu1) ...
2026-03-10T15:01:54.127 INFO:teuthology.orchestra.run.vm00.stdout:Removing lua-sec:amd64 (1.0.2-1) ...
2026-03-10T15:01:54.136 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-pecan (1.3.3-4ubuntu2) ...
2026-03-10T15:01:54.142 INFO:teuthology.orchestra.run.vm00.stdout:Removing lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ...
2026-03-10T15:01:54.158 INFO:teuthology.orchestra.run.vm00.stdout:Removing lua5.1 (5.1.5-8.1build4) ...
2026-03-10T15:01:54.178 INFO:teuthology.orchestra.run.vm00.stdout:Removing nvme-cli (1.16-3ubuntu0.3) ...
2026-03-10T15:01:54.194 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-webtest (2.0.35-1) ...
2026-03-10T15:01:54.244 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-pastescript (2.0.2-4) ...
2026-03-10T15:01:54.297 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-pastedeploy (2.1.1-1) ...
2026-03-10T15:01:54.348 INFO:teuthology.orchestra.run.vm03.stdout:Removing python-pastedeploy-tpl (2.1.1-1) ...
2026-03-10T15:01:54.361 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-asyncssh (2.5.0-1ubuntu0.1) ...
2026-03-10T15:01:54.419 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-kubernetes (12.0.1-1ubuntu1) ...
2026-03-10T15:01:54.578 INFO:teuthology.orchestra.run.vm00.stdout:Removing pkg-config (0.29.2-1ubuntu3) ...
2026-03-10T15:01:54.610 INFO:teuthology.orchestra.run.vm00.stdout:Removing python-asyncssh-doc (2.5.0-1ubuntu0.1) ...
2026-03-10T15:01:54.635 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-pecan (1.3.3-4ubuntu2) ...
2026-03-10T15:01:54.687 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-google-auth (1.5.1-3) ...
2026-03-10T15:01:54.693 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-webtest (2.0.35-1) ...
2026-03-10T15:01:54.741 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-cachetools (5.0.0-1) ...
2026-03-10T15:01:54.744 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-pastescript (2.0.2-4) ...
2026-03-10T15:01:54.791 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T15:01:54.798 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-pastedeploy (2.1.1-1) ...
2026-03-10T15:01:54.843 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T15:01:54.849 INFO:teuthology.orchestra.run.vm00.stdout:Removing python-pastedeploy-tpl (2.1.1-1) ...
2026-03-10T15:01:54.860 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-asyncssh (2.5.0-1ubuntu0.1) ...
2026-03-10T15:01:54.901 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-cherrypy3 (18.6.1-4) ...
2026-03-10T15:01:54.918 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-kubernetes (12.0.1-1ubuntu1) ...
2026-03-10T15:01:55.037 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-cheroot (8.5.2+ds1-1ubuntu3.1) ...
2026-03-10T15:01:55.091 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-jaraco.collections (3.4.0-2) ...
2026-03-10T15:01:55.145 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-jaraco.classes (3.2.1-3) ...
2026-03-10T15:01:55.187 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-google-auth (1.5.1-3) ...
2026-03-10T15:01:55.197 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-portend (3.0.0-1) ...
2026-03-10T15:01:55.240 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-cachetools (5.0.0-1) ...
2026-03-10T15:01:55.246 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-tempora (4.1.2-1) ...
2026-03-10T15:01:55.289 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T15:01:55.295 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-jaraco.text (3.6.0-2) ...
2026-03-10T15:01:55.337 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T15:01:55.347 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-jaraco.functools (3.4.0-2) ...
2026-03-10T15:01:55.397 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-cherrypy3 (18.6.1-4) ...
2026-03-10T15:01:55.398 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-sklearn (0.23.2-5ubuntu6) ...
2026-03-10T15:01:55.463 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-cheroot (8.5.2+ds1-1ubuntu3.1) ...
2026-03-10T15:01:55.520 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-jaraco.collections (3.4.0-2) ...
2026-03-10T15:01:55.535 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-joblib (0.17.0-4ubuntu1) ...
2026-03-10T15:01:55.635 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-jaraco.classes (3.2.1-3) ...
2026-03-10T15:01:55.640 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-logutils (0.3.3-8) ...
2026-03-10T15:01:55.799 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-portend (3.0.0-1) ...
2026-03-10T15:01:55.802 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-mako (1.1.3+ds1-2ubuntu0.1) ...
2026-03-10T15:01:55.849 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-tempora (4.1.2-1) ...
2026-03-10T15:01:55.857 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-natsort (8.0.2-1) ...
2026-03-10T15:01:55.900 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-jaraco.text (3.6.0-2) ...
2026-03-10T15:01:55.910 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-paste (3.5.0+dfsg1-1) ...
2026-03-10T15:01:55.948 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-jaraco.functools (3.4.0-2) ...
2026-03-10T15:01:55.975 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-prettytable (2.5.0-2) ...
2026-03-10T15:01:55.996 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-sklearn (0.23.2-5ubuntu6) ...
2026-03-10T15:01:56.028 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-psutil (5.9.0-1build1) ...
2026-03-10T15:01:56.079 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-pyinotify (0.9.6-1.3) ...
2026-03-10T15:01:56.124 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-joblib (0.17.0-4ubuntu1) ...
2026-03-10T15:01:56.129 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-routes (2.5.1-1ubuntu1) ...
2026-03-10T15:01:56.186 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-repoze.lru (0.7-2) ...
2026-03-10T15:01:56.186 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-logutils (0.3.3-8) ...
2026-03-10T15:01:56.237 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-mako (1.1.3+ds1-2ubuntu0.1) ...
2026-03-10T15:01:56.241 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-requests-oauthlib (1.3.0+ds-0.1) ...
2026-03-10T15:01:56.287 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-natsort (8.0.2-1) ...
2026-03-10T15:01:56.296 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-rsa (4.8-1) ...
2026-03-10T15:01:56.338 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-paste (3.5.0+dfsg1-1) ...
2026-03-10T15:01:56.353 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-simplegeneric (0.8.1-3) ...
2026-03-10T15:01:56.402 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-prettytable (2.5.0-2) ...
2026-03-10T15:01:56.403 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-simplejson (3.17.6-1build1) ...
2026-03-10T15:01:56.453 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-psutil (5.9.0-1build1) ...
2026-03-10T15:01:56.459 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-singledispatch (3.4.0.3-3) ...
2026-03-10T15:01:56.507 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ...
2026-03-10T15:01:56.511 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-pyinotify (0.9.6-1.3) ...
2026-03-10T15:01:56.534 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-tempita (0.5.2-6ubuntu1) ...
2026-03-10T15:01:56.562 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-routes (2.5.1-1ubuntu1) ...
2026-03-10T15:01:56.585 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-threadpoolctl (3.1.0-1) ...
2026-03-10T15:01:56.617 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-repoze.lru (0.7-2) ...
2026-03-10T15:01:56.642 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-waitress (1.4.4-1.1ubuntu1.1) ...
2026-03-10T15:01:56.672 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-requests-oauthlib (1.3.0+ds-0.1) ...
2026-03-10T15:01:56.696 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-wcwidth (0.2.5+dfsg1-1) ...
2026-03-10T15:01:56.726 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-rsa (4.8-1) ...
2026-03-10T15:01:56.745 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-webob (1:1.8.6-1.1ubuntu0.1) ...
2026-03-10T15:01:56.778 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-simplegeneric (0.8.1-3) ...
2026-03-10T15:01:56.797 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-websocket (1.2.3-1) ...
2026-03-10T15:01:56.831 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-simplejson (3.17.6-1build1) ...
2026-03-10T15:01:56.849 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ...
2026-03-10T15:01:56.888 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-singledispatch (3.4.0.3-3) ...
2026-03-10T15:01:56.905 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-zc.lockfile (2.0-1) ...
2026-03-10T15:01:56.943 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ...
2026-03-10T15:01:56.955 INFO:teuthology.orchestra.run.vm03.stdout:Removing qttranslations5-l10n (5.15.3-1) ...
2026-03-10T15:01:56.970 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-tempita (0.5.2-6ubuntu1) ...
2026-03-10T15:01:56.978 INFO:teuthology.orchestra.run.vm03.stdout:Removing smartmontools (7.2-1ubuntu0.1) ...
2026-03-10T15:01:57.023 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-threadpoolctl (3.1.0-1) ...
2026-03-10T15:01:57.072 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-waitress (1.4.4-1.1ubuntu1.1) ...
2026-03-10T15:01:57.122 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-wcwidth (0.2.5+dfsg1-1) ...
2026-03-10T15:01:57.242 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-webob (1:1.8.6-1.1ubuntu0.1) ...
2026-03-10T15:01:57.296 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-websocket (1.2.3-1) ...
2026-03-10T15:01:57.344 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ...
2026-03-10T15:01:57.377 INFO:teuthology.orchestra.run.vm03.stdout:Removing socat (1.7.4.1-3ubuntu4) ...
2026-03-10T15:01:57.390 INFO:teuthology.orchestra.run.vm03.stdout:Removing unzip (6.0-26ubuntu3.2) ...
2026-03-10T15:01:57.399 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-zc.lockfile (2.0-1) ...
2026-03-10T15:01:57.411 INFO:teuthology.orchestra.run.vm03.stdout:Removing xmlstarlet (1.6.1-2.1) ...
2026-03-10T15:01:57.429 INFO:teuthology.orchestra.run.vm03.stdout:Removing zip (3.0-12build2) ...
2026-03-10T15:01:57.451 INFO:teuthology.orchestra.run.vm00.stdout:Removing qttranslations5-l10n (5.15.3-1) ...
2026-03-10T15:01:57.454 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-10T15:01:57.464 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-10T15:01:57.475 INFO:teuthology.orchestra.run.vm00.stdout:Removing smartmontools (7.2-1ubuntu0.1) ...
2026-03-10T15:01:57.507 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for mailcap (3.70+nmu1ubuntu1) ...
2026-03-10T15:01:57.515 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for initramfs-tools (0.140ubuntu13.5) ...
2026-03-10T15:01:57.535 INFO:teuthology.orchestra.run.vm03.stdout:update-initramfs: Generating /boot/initrd.img-5.15.0-1092-kvm
2026-03-10T15:01:57.896 INFO:teuthology.orchestra.run.vm00.stdout:Removing socat (1.7.4.1-3ubuntu4) ...
2026-03-10T15:01:57.908 INFO:teuthology.orchestra.run.vm00.stdout:Removing unzip (6.0-26ubuntu3.2) ...
2026-03-10T15:01:57.926 INFO:teuthology.orchestra.run.vm00.stdout:Removing xmlstarlet (1.6.1-2.1) ...
2026-03-10T15:01:57.944 INFO:teuthology.orchestra.run.vm00.stdout:Removing zip (3.0-12build2) ...
2026-03-10T15:01:57.969 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-10T15:01:57.980 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-10T15:01:58.027 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for mailcap (3.70+nmu1ubuntu1) ...
2026-03-10T15:01:58.035 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for initramfs-tools (0.140ubuntu13.5) ...
2026-03-10T15:01:58.054 INFO:teuthology.orchestra.run.vm00.stdout:update-initramfs: Generating /boot/initrd.img-5.15.0-1092-kvm
2026-03-10T15:01:59.177 INFO:teuthology.orchestra.run.vm03.stdout:W: mkconf: MD subsystem is not loaded, thus I cannot scan for arrays.
2026-03-10T15:01:59.178 INFO:teuthology.orchestra.run.vm03.stdout:W: mdadm: failed to auto-generate temporary mdadm.conf file.
2026-03-10T15:01:59.612 INFO:teuthology.orchestra.run.vm00.stdout:W: mkconf: MD subsystem is not loaded, thus I cannot scan for arrays.
2026-03-10T15:01:59.613 INFO:teuthology.orchestra.run.vm00.stdout:W: mdadm: failed to auto-generate temporary mdadm.conf file.
2026-03-10T15:02:01.266 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T15:02:01.268 DEBUG:teuthology.parallel:result is None
2026-03-10T15:02:01.854 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T15:02:01.857 DEBUG:teuthology.parallel:result is None
2026-03-10T15:02:01.857 INFO:teuthology.task.install:Removing ceph sources lists on ubuntu@vm00.local
2026-03-10T15:02:01.857 INFO:teuthology.task.install:Removing ceph sources lists on ubuntu@vm03.local
2026-03-10T15:02:01.857 DEBUG:teuthology.orchestra.run.vm00:> sudo rm -f /etc/apt/sources.list.d/ceph.list
2026-03-10T15:02:01.857 DEBUG:teuthology.orchestra.run.vm03:> sudo rm -f /etc/apt/sources.list.d/ceph.list
2026-03-10T15:02:01.865 DEBUG:teuthology.orchestra.run.vm03:> sudo apt-get update
2026-03-10T15:02:01.906 DEBUG:teuthology.orchestra.run.vm00:> sudo apt-get update
2026-03-10T15:02:02.055 INFO:teuthology.orchestra.run.vm03.stdout:Hit:1 https://archive.ubuntu.com/ubuntu jammy InRelease
2026-03-10T15:02:02.059 INFO:teuthology.orchestra.run.vm03.stdout:Hit:2 https://archive.ubuntu.com/ubuntu jammy-updates InRelease
2026-03-10T15:02:02.067 INFO:teuthology.orchestra.run.vm03.stdout:Hit:3 https://archive.ubuntu.com/ubuntu jammy-backports InRelease
2026-03-10T15:02:02.085 INFO:teuthology.orchestra.run.vm00.stdout:Hit:1 https://archive.ubuntu.com/ubuntu jammy InRelease
2026-03-10T15:02:02.087 INFO:teuthology.orchestra.run.vm00.stdout:Hit:2 https://archive.ubuntu.com/ubuntu jammy-updates InRelease
2026-03-10T15:02:02.096 INFO:teuthology.orchestra.run.vm00.stdout:Hit:3 https://archive.ubuntu.com/ubuntu jammy-backports InRelease
2026-03-10T15:02:02.152 INFO:teuthology.orchestra.run.vm03.stdout:Hit:4 https://security.ubuntu.com/ubuntu jammy-security InRelease
2026-03-10T15:02:02.189 INFO:teuthology.orchestra.run.vm00.stdout:Hit:4 https://security.ubuntu.com/ubuntu jammy-security InRelease
2026-03-10T15:02:03.226 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-10T15:02:03.240 DEBUG:teuthology.parallel:result is None 2026-03-10T15:02:03.248 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists... 2026-03-10T15:02:03.262 DEBUG:teuthology.parallel:result is None 2026-03-10T15:02:03.263 DEBUG:teuthology.run_tasks:Unwinding manager clock 2026-03-10T15:02:03.265 INFO:teuthology.task.clock:Checking final clock skew... 2026-03-10T15:02:03.265 DEBUG:teuthology.orchestra.run.vm00:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true 2026-03-10T15:02:03.266 DEBUG:teuthology.orchestra.run.vm03:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true 2026-03-10T15:02:03.420 INFO:teuthology.orchestra.run.vm03.stdout: remote refid st t when poll reach delay offset jitter 2026-03-10T15:02:03.420 INFO:teuthology.orchestra.run.vm03.stdout:============================================================================== 2026-03-10T15:02:03.420 INFO:teuthology.orchestra.run.vm03.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-10T15:02:03.420 INFO:teuthology.orchestra.run.vm03.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-10T15:02:03.421 INFO:teuthology.orchestra.run.vm03.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-10T15:02:03.421 INFO:teuthology.orchestra.run.vm03.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-10T15:02:03.421 INFO:teuthology.orchestra.run.vm03.stdout: ntp.ubuntu.com .POOL. 
16 p - 64 0 0.000 +0.000 0.000 2026-03-10T15:02:03.421 INFO:teuthology.orchestra.run.vm03.stdout:-ntp2.kernfusion 192.53.103.108 2 u 64 64 377 29.766 +0.463 1.286 2026-03-10T15:02:03.421 INFO:teuthology.orchestra.run.vm03.stdout:-mail.klausen.dk 193.79.237.14 2 u 62 64 377 23.563 +0.106 0.211 2026-03-10T15:02:03.421 INFO:teuthology.orchestra.run.vm03.stdout:#pve2.h4x-gamers 192.53.103.108 2 u 63 64 377 25.038 +0.607 0.070 2026-03-10T15:02:03.421 INFO:teuthology.orchestra.run.vm03.stdout:-47.ip-51-75-67. 225.254.30.190 4 u 53 64 377 21.192 +1.059 0.091 2026-03-10T15:02:03.421 INFO:teuthology.orchestra.run.vm03.stdout:-static.46.170.2 188.40.142.18 3 u 54 64 377 25.015 +0.250 0.131 2026-03-10T15:02:03.421 INFO:teuthology.orchestra.run.vm03.stdout:+ntp1.intra2net. 80.72.67.48 2 u 55 64 377 20.385 +0.612 0.172 2026-03-10T15:02:03.421 INFO:teuthology.orchestra.run.vm03.stdout:+node-4.infogral 168.239.11.197 2 u 59 64 377 23.528 +0.688 0.106 2026-03-10T15:02:03.421 INFO:teuthology.orchestra.run.vm03.stdout:-vps-ber1.orlean 127.65.222.189 2 u 51 64 377 28.831 +1.036 0.080 2026-03-10T15:02:03.421 INFO:teuthology.orchestra.run.vm03.stdout:+mail.anyvm.tech 129.69.253.17 2 u 56 64 377 23.532 +0.623 0.151 2026-03-10T15:02:03.421 INFO:teuthology.orchestra.run.vm03.stdout:-server1b.meinbe 193.158.22.13 2 u 63 64 377 23.560 +0.805 0.083 2026-03-10T15:02:03.421 INFO:teuthology.orchestra.run.vm03.stdout:#ec2-3-121-254-2 237.17.204.95 2 u 55 64 377 24.159 +0.029 0.150 2026-03-10T15:02:03.421 INFO:teuthology.orchestra.run.vm03.stdout:*static.222.16.4 35.73.197.144 2 u 54 64 377 0.343 +0.572 0.098 2026-03-10T15:02:03.421 INFO:teuthology.orchestra.run.vm03.stdout:#185.232.69.65 ( .PHC0. 
1 u 59 64 377 28.287 -2.044 0.147 2026-03-10T15:02:03.732 INFO:teuthology.orchestra.run.vm00.stdout: remote refid st t when poll reach delay offset jitter 2026-03-10T15:02:03.732 INFO:teuthology.orchestra.run.vm00.stdout:============================================================================== 2026-03-10T15:02:03.732 INFO:teuthology.orchestra.run.vm00.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-10T15:02:03.732 INFO:teuthology.orchestra.run.vm00.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-10T15:02:03.732 INFO:teuthology.orchestra.run.vm00.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-10T15:02:03.732 INFO:teuthology.orchestra.run.vm00.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-10T15:02:03.732 INFO:teuthology.orchestra.run.vm00.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-10T15:02:03.732 INFO:teuthology.orchestra.run.vm00.stdout:#node-4.infogral 168.239.11.197 2 u 57 64 377 23.536 -3.182 0.638 2026-03-10T15:02:03.732 INFO:teuthology.orchestra.run.vm00.stdout:-mail.anyvm.tech 129.69.253.17 2 u 59 64 377 23.525 -2.297 0.733 2026-03-10T15:02:03.732 INFO:teuthology.orchestra.run.vm00.stdout:-v22025082392863 129.69.253.1 2 u 61 64 377 28.682 -4.786 1.202 2026-03-10T15:02:03.732 INFO:teuthology.orchestra.run.vm00.stdout:+mail.klausen.dk 193.79.237.14 2 u 65 64 377 23.553 -3.186 0.492 2026-03-10T15:02:03.732 INFO:teuthology.orchestra.run.vm00.stdout:*static.222.16.4 35.73.197.144 2 u 63 64 377 0.381 -3.286 0.651 2026-03-10T15:02:03.732 INFO:teuthology.orchestra.run.vm00.stdout:+47.ip-51-75-67. 
225.254.30.190 4 u 1 64 377 21.222 -2.840 0.449 2026-03-10T15:02:03.732 INFO:teuthology.orchestra.run.vm00.stdout:-static.46.170.2 188.40.142.18 3 u 57 64 377 24.982 -3.494 0.624 2026-03-10T15:02:03.732 INFO:teuthology.orchestra.run.vm00.stdout:#ntp2.kernfusion 192.53.103.108 2 u 54 64 377 29.946 -1.666 1.756 2026-03-10T15:02:03.732 INFO:teuthology.orchestra.run.vm00.stdout:#ntp.ntstime.org 131.188.3.222 2 u 60 64 377 28.279 -5.356 0.515 2026-03-10T15:02:03.733 INFO:teuthology.orchestra.run.vm00.stdout:+vps-ber1.orlean 127.65.222.189 2 u 57 64 377 28.922 -2.628 0.540 2026-03-10T15:02:03.733 INFO:teuthology.orchestra.run.vm00.stdout:-185.125.190.57 194.121.207.249 2 u 26 64 377 35.266 -4.749 0.544 2026-03-10T15:02:03.733 INFO:teuthology.orchestra.run.vm00.stdout:#ec2-3-121-254-2 237.17.204.95 2 u 50 64 377 24.125 -2.513 1.071 2026-03-10T15:02:03.733 INFO:teuthology.orchestra.run.vm00.stdout:-185.232.69.65 ( .PHC0. 1 u 55 64 377 28.721 -5.617 0.511 2026-03-10T15:02:03.733 INFO:teuthology.orchestra.run.vm00.stdout:+185.125.190.56 79.243.60.50 2 u 31 64 377 32.036 -2.990 0.421 2026-03-10T15:02:03.733 DEBUG:teuthology.run_tasks:Unwinding manager ansible.cephlab 2026-03-10T15:02:03.735 INFO:teuthology.task.ansible:Skipping ansible cleanup... 2026-03-10T15:02:03.735 DEBUG:teuthology.run_tasks:Unwinding manager selinux 2026-03-10T15:02:03.738 DEBUG:teuthology.run_tasks:Unwinding manager pcp 2026-03-10T15:02:03.740 DEBUG:teuthology.run_tasks:Unwinding manager internal.timer 2026-03-10T15:02:03.742 INFO:teuthology.task.internal:Duration was 1282.539016 seconds 2026-03-10T15:02:03.742 DEBUG:teuthology.run_tasks:Unwinding manager internal.syslog 2026-03-10T15:02:03.744 INFO:teuthology.task.internal.syslog:Shutting down syslog monitoring... 
2026-03-10T15:02:03.745 DEBUG:teuthology.orchestra.run.vm00:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart 2026-03-10T15:02:03.746 DEBUG:teuthology.orchestra.run.vm03:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart 2026-03-10T15:02:03.773 INFO:teuthology.task.internal.syslog:Checking logs for errors... 2026-03-10T15:02:03.773 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm00.local 2026-03-10T15:02:03.773 DEBUG:teuthology.orchestra.run.vm00:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1 2026-03-10T15:02:03.826 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm03.local 2026-03-10T15:02:03.826 DEBUG:teuthology.orchestra.run.vm03:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | 
grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1 2026-03-10T15:02:03.838 INFO:teuthology.task.internal.syslog:Gathering journactl... 2026-03-10T15:02:03.838 DEBUG:teuthology.orchestra.run.vm00:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log 2026-03-10T15:02:03.867 DEBUG:teuthology.orchestra.run.vm03:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log 2026-03-10T15:02:03.967 INFO:teuthology.task.internal.syslog:Compressing syslogs... 2026-03-10T15:02:03.967 DEBUG:teuthology.orchestra.run.vm00:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose -- 2026-03-10T15:02:03.968 DEBUG:teuthology.orchestra.run.vm03:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose -- 2026-03-10T15:02:03.974 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log 2026-03-10T15:02:03.975 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log 2026-03-10T15:02:03.975 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log 2026-03-10T15:02:03.975 INFO:teuthology.orchestra.run.vm00.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: /home/ubuntu/cephtest/archive/syslog/kern.log: 0.0% -- replaced 
with /home/ubuntu/cephtest/archive/syslog/misc.log.gz 2026-03-10T15:02:03.975 INFO:teuthology.orchestra.run.vm00.stderr: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz 2026-03-10T15:02:03.975 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log 2026-03-10T15:02:03.976 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log 2026-03-10T15:02:03.976 INFO:teuthology.orchestra.run.vm03.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log 2026-03-10T15:02:03.977 INFO:teuthology.orchestra.run.vm03.stderr: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz 2026-03-10T15:02:03.977 INFO:teuthology.orchestra.run.vm03.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz 2026-03-10T15:02:03.990 INFO:teuthology.orchestra.run.vm03.stderr:/home/ubuntu/cephtest/archive/syslog/journalctl.log: 90.6% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz 2026-03-10T15:02:03.992 INFO:teuthology.orchestra.run.vm00.stderr:/home/ubuntu/cephtest/archive/syslog/journalctl.log: 92.8% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz 2026-03-10T15:02:03.993 DEBUG:teuthology.run_tasks:Unwinding manager internal.sudo 2026-03-10T15:02:03.996 INFO:teuthology.task.internal:Restoring /etc/sudoers... 
2026-03-10T15:02:03.996 DEBUG:teuthology.orchestra.run.vm00:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers 2026-03-10T15:02:04.043 DEBUG:teuthology.orchestra.run.vm03:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers 2026-03-10T15:02:04.051 DEBUG:teuthology.run_tasks:Unwinding manager internal.coredump 2026-03-10T15:02:04.053 DEBUG:teuthology.orchestra.run.vm00:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump 2026-03-10T15:02:04.087 DEBUG:teuthology.orchestra.run.vm03:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump 2026-03-10T15:02:04.093 INFO:teuthology.orchestra.run.vm00.stdout:kernel.core_pattern = core 2026-03-10T15:02:04.103 INFO:teuthology.orchestra.run.vm03.stdout:kernel.core_pattern = core 2026-03-10T15:02:04.112 DEBUG:teuthology.orchestra.run.vm00:> test -e /home/ubuntu/cephtest/archive/coredump 2026-03-10T15:02:04.145 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-10T15:02:04.145 DEBUG:teuthology.orchestra.run.vm03:> test -e /home/ubuntu/cephtest/archive/coredump 2026-03-10T15:02:04.157 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-10T15:02:04.157 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive 2026-03-10T15:02:04.161 INFO:teuthology.task.internal:Transferring archived files... 
2026-03-10T15:02:04.161 DEBUG:teuthology.misc:Transferring archived files from vm00:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1070/remote/vm00 2026-03-10T15:02:04.161 DEBUG:teuthology.orchestra.run.vm00:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- . 2026-03-10T15:02:04.196 DEBUG:teuthology.misc:Transferring archived files from vm03:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1070/remote/vm03 2026-03-10T15:02:04.196 DEBUG:teuthology.orchestra.run.vm03:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- . 2026-03-10T15:02:04.208 INFO:teuthology.task.internal:Removing archive directory... 2026-03-10T15:02:04.208 DEBUG:teuthology.orchestra.run.vm00:> rm -rf -- /home/ubuntu/cephtest/archive 2026-03-10T15:02:04.239 DEBUG:teuthology.orchestra.run.vm03:> rm -rf -- /home/ubuntu/cephtest/archive 2026-03-10T15:02:04.254 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive_upload 2026-03-10T15:02:04.257 INFO:teuthology.task.internal:Not uploading archives. 2026-03-10T15:02:04.257 DEBUG:teuthology.run_tasks:Unwinding manager internal.base 2026-03-10T15:02:04.260 INFO:teuthology.task.internal:Tidying up after the test... 
2026-03-10T15:02:04.260 DEBUG:teuthology.orchestra.run.vm00:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest 2026-03-10T15:02:04.283 DEBUG:teuthology.orchestra.run.vm03:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest 2026-03-10T15:02:04.285 INFO:teuthology.orchestra.run.vm00.stdout: 258079 4 drwxr-xr-x 2 ubuntu ubuntu 4096 Mar 10 15:02 /home/ubuntu/cephtest 2026-03-10T15:02:04.298 INFO:teuthology.orchestra.run.vm03.stdout: 258207 4 drwxr-xr-x 2 ubuntu ubuntu 4096 Mar 10 15:02 /home/ubuntu/cephtest 2026-03-10T15:02:04.299 DEBUG:teuthology.run_tasks:Unwinding manager console_log 2026-03-10T15:02:04.306 INFO:teuthology.run:Summary data: description: orch/cephadm/with-work/{0-distro/ubuntu_22.04 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_python} duration: 1282.539016008377 flavor: default owner: kyr success: true 2026-03-10T15:02:04.306 DEBUG:teuthology.report:Pushing job info to http://localhost:8080 2026-03-10T15:02:04.330 INFO:teuthology.run:pass